2025-10-09 09:35:52.867120 | Job console starting 2025-10-09 09:35:52.901455 | Updating git repos 2025-10-09 09:35:52.977929 | Cloning repos into workspace 2025-10-09 09:35:53.204802 | Restoring repo states 2025-10-09 09:35:53.237196 | Merging changes 2025-10-09 09:35:53.237216 | Checking out repos 2025-10-09 09:35:53.628066 | Preparing playbooks 2025-10-09 09:35:54.225133 | Running Ansible setup 2025-10-09 09:35:59.303728 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main] 2025-10-09 09:36:00.052407 | 2025-10-09 09:36:00.052582 | PLAY [Base pre] 2025-10-09 09:36:00.069387 | 2025-10-09 09:36:00.069522 | TASK [Setup log path fact] 2025-10-09 09:36:00.099344 | orchestrator | ok 2025-10-09 09:36:00.116782 | 2025-10-09 09:36:00.116930 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-10-09 09:36:00.158994 | orchestrator | ok 2025-10-09 09:36:00.171458 | 2025-10-09 09:36:00.171614 | TASK [emit-job-header : Print job information] 2025-10-09 09:36:00.228749 | # Job Information 2025-10-09 09:36:00.229016 | Ansible Version: 2.16.14 2025-10-09 09:36:00.229076 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04 2025-10-09 09:36:00.229135 | Pipeline: post 2025-10-09 09:36:00.229174 | Executor: 521e9411259a 2025-10-09 09:36:00.229208 | Triggered by: https://github.com/osism/testbed/commit/cbb11947ce2601da075b0f7ece9477f2cf45eae4 2025-10-09 09:36:00.229244 | Event ID: 5385c752-a4f3-11f0-8ad9-0c0a6becd823 2025-10-09 09:36:00.238651 | 2025-10-09 09:36:00.238791 | LOOP [emit-job-header : Print node information] 2025-10-09 09:36:00.377209 | orchestrator | ok: 2025-10-09 09:36:00.377420 | orchestrator | # Node Information 2025-10-09 09:36:00.377464 | orchestrator | Inventory Hostname: orchestrator 2025-10-09 09:36:00.377499 | orchestrator | Hostname: zuul-static-regiocloud-infra-1 2025-10-09 09:36:00.377528 | orchestrator | Username: zuul-testbed05 2025-10-09 09:36:00.377596 | orchestrator | Distro: Debian 12.12 2025-10-09 09:36:00.377632 | orchestrator | Provider: static-testbed 2025-10-09 09:36:00.377661 | orchestrator | Region: 2025-10-09 09:36:00.377690 | orchestrator | Label: testbed-orchestrator 2025-10-09 09:36:00.377717 | orchestrator | Product Name: OpenStack Nova 2025-10-09 09:36:00.377744 | orchestrator | Interface IP: 81.163.193.140 2025-10-09 09:36:00.404266 | 2025-10-09 09:36:00.404413 | TASK [log-inventory : Ensure Zuul Ansible directory exists] 2025-10-09 09:36:00.914681 | orchestrator -> localhost | changed 2025-10-09 09:36:00.931784 | 2025-10-09 09:36:00.931951 | TASK [log-inventory : Copy ansible inventory to logs dir] 2025-10-09 09:36:01.990088 | orchestrator -> localhost | changed 2025-10-09 09:36:02.016575 | 2025-10-09 09:36:02.016708 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build] 2025-10-09 09:36:02.294469 | orchestrator -> localhost | ok 2025-10-09 09:36:02.301670 | 2025-10-09 09:36:02.301792 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID] 2025-10-09 09:36:02.333806 | orchestrator | ok 2025-10-09 09:36:02.351564 | orchestrator | included: /var/lib/zuul/builds/17032c6985c94374a7f93c30fddd8d2c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml 2025-10-09 09:36:02.359738 | 2025-10-09 09:36:02.359852 | TASK [add-build-sshkey : Create Temp SSH key] 2025-10-09 09:36:05.265256 | orchestrator -> localhost | Generating public/private rsa key pair. 
2025-10-09 09:36:05.265747 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/17032c6985c94374a7f93c30fddd8d2c/work/17032c6985c94374a7f93c30fddd8d2c_id_rsa 2025-10-09 09:36:05.265856 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/17032c6985c94374a7f93c30fddd8d2c/work/17032c6985c94374a7f93c30fddd8d2c_id_rsa.pub 2025-10-09 09:36:05.265931 | orchestrator -> localhost | The key fingerprint is: 2025-10-09 09:36:05.266007 | orchestrator -> localhost | SHA256:oO8zSGyuauU837hrswZZxOR5xRmjG0aH94F3nsik6dA zuul-build-sshkey 2025-10-09 09:36:05.266071 | orchestrator -> localhost | The key's randomart image is: 2025-10-09 09:36:05.266155 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-10-09 09:36:05.266218 | orchestrator -> localhost | | o. .o=+ | 2025-10-09 09:36:05.266280 | orchestrator -> localhost | | .oo.+=.+ . | 2025-10-09 09:36:05.266338 | orchestrator -> localhost | | .o.=o B = . | 2025-10-09 09:36:05.266395 | orchestrator -> localhost | | ooooE + o | 2025-10-09 09:36:05.266450 | orchestrator -> localhost | | .+ .S | 2025-10-09 09:36:05.266524 | orchestrator -> localhost | | ++. . | 2025-10-09 09:36:05.266606 | orchestrator -> localhost | | ++... | 2025-10-09 09:36:05.266666 | orchestrator -> localhost | | . +o== | 2025-10-09 09:36:05.266726 | orchestrator -> localhost | |o...=**+ | 2025-10-09 09:36:05.266785 | orchestrator -> localhost | +----[SHA256]-----+ 2025-10-09 09:36:05.267005 | orchestrator -> localhost | ok: Runtime: 0:00:02.399149 2025-10-09 09:36:05.284679 | 2025-10-09 09:36:05.284853 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-10-09 09:36:05.317439 | orchestrator | ok 2025-10-09 09:36:05.328501 | orchestrator | included: /var/lib/zuul/builds/17032c6985c94374a7f93c30fddd8d2c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-10-09 09:36:05.337826 | 2025-10-09 09:36:05.337945 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-10-09 09:36:05.362282 | orchestrator | skipping: Conditional result was False 2025-10-09 09:36:05.370079 | 2025-10-09 09:36:05.370181 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-10-09 09:36:05.955237 | orchestrator | changed 2025-10-09 09:36:05.965022 | 2025-10-09 09:36:05.965158 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-10-09 09:36:06.236357 | orchestrator | ok 2025-10-09 09:36:06.245833 | 2025-10-09 09:36:06.245957 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-10-09 09:36:06.674209 | orchestrator | ok 2025-10-09 09:36:06.685175 | 2025-10-09 09:36:06.685321 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-10-09 09:36:07.081107 | orchestrator | ok 2025-10-09 09:36:07.090214 | 2025-10-09 09:36:07.090338 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-10-09 09:36:07.114608 | orchestrator | skipping: Conditional result was False 2025-10-09 09:36:07.123515 | 2025-10-09 09:36:07.123693 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-10-09 09:36:07.566509 | orchestrator -> localhost | changed 2025-10-09 09:36:07.589424 | 2025-10-09 09:36:07.589611 | TASK [add-build-sshkey : Add back temp key] 2025-10-09 09:36:07.938724 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/17032c6985c94374a7f93c30fddd8d2c/work/17032c6985c94374a7f93c30fddd8d2c_id_rsa (zuul-build-sshkey) 2025-10-09 09:36:07.939317 | orchestrator -> localhost | 
ok: Runtime: 0:00:00.018658 2025-10-09 09:36:07.953895 | 2025-10-09 09:36:07.954039 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-10-09 09:36:08.352711 | orchestrator | ok 2025-10-09 09:36:08.360504 | 2025-10-09 09:36:08.360635 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-10-09 09:36:08.394822 | orchestrator | skipping: Conditional result was False 2025-10-09 09:36:08.445482 | 2025-10-09 09:36:08.445644 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-10-09 09:36:08.841409 | orchestrator | ok 2025-10-09 09:36:08.852817 | 2025-10-09 09:36:08.852939 | TASK [validate-host : Define zuul_info_dir fact] 2025-10-09 09:36:08.892542 | orchestrator | ok 2025-10-09 09:36:08.899750 | 2025-10-09 09:36:08.899851 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-10-09 09:36:09.224884 | orchestrator -> localhost | ok 2025-10-09 09:36:09.238744 | 2025-10-09 09:36:09.238919 | TASK [validate-host : Collect information about the host] 2025-10-09 09:36:10.410283 | orchestrator | ok 2025-10-09 09:36:10.425126 | 2025-10-09 09:36:10.425247 | TASK [validate-host : Sanitize hostname] 2025-10-09 09:36:10.499723 | orchestrator | ok 2025-10-09 09:36:10.507532 | 2025-10-09 09:36:10.507690 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-10-09 09:36:11.042643 | orchestrator -> localhost | changed 2025-10-09 09:36:11.056862 | 2025-10-09 09:36:11.057012 | TASK [validate-host : Collect information about zuul worker] 2025-10-09 09:36:11.481405 | orchestrator | ok 2025-10-09 09:36:11.489132 | 2025-10-09 09:36:11.489264 | TASK [validate-host : Write out all zuul information for each host] 2025-10-09 09:36:12.070357 | orchestrator -> localhost | changed 2025-10-09 09:36:12.083659 | 2025-10-09 09:36:12.083781 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-10-09 09:36:12.356507 | orchestrator | ok 2025-10-09 09:36:12.369437 | 2025-10-09 09:36:12.369776 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-10-09 09:36:51.326166 | orchestrator | changed: 2025-10-09 09:36:51.327614 | orchestrator | .d..t...... src/ 2025-10-09 09:36:51.327674 | orchestrator | .d..t...... src/github.com/ 2025-10-09 09:36:51.327700 | orchestrator | .d..t...... src/github.com/osism/ 2025-10-09 09:36:51.327723 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-10-09 09:36:51.327744 | orchestrator | RedHat.yml 2025-10-09 09:36:51.340877 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-10-09 09:36:51.340895 | orchestrator | RedHat.yml 2025-10-09 09:36:51.340946 | orchestrator | = 1.53.0"... 2025-10-09 09:37:02.269732 | orchestrator | 09:37:02.269 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-10-09 09:37:02.424273 | orchestrator | 09:37:02.424 STDOUT terraform: - Installing hashicorp/null v3.2.4... 2025-10-09 09:37:02.865115 | orchestrator | 09:37:02.864 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80) 2025-10-09 09:37:03.128710 | orchestrator | 09:37:03.128 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2... 
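OpenTofu resolves these provider versions from constraints declared in the configuration, normally in a required_providers block. A minimal sketch consistent with the init output above and below; everything except local's ">= 2.2.0" constraint is an assumption:

terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumption; only a truncated ">= 1.53.0" fragment appears in the log, and v3.3.2 is what gets installed
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # constraint shown in the "Finding hashicorp/local versions" line above
    }
    null = {
      source = "hashicorp/null" # no constraint visible in the log; v3.2.4 is installed
    }
  }
}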
2025-10-09 09:37:03.768287 | orchestrator | 09:37:03.768 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-10-09 09:37:03.838892 | orchestrator | 09:37:03.838 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-10-09 09:37:04.256948 | orchestrator | 09:37:04.256 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-10-09 09:37:04.257040 | orchestrator | 09:37:04.256 STDOUT terraform: Providers are signed by their developers.
2025-10-09 09:37:04.257048 | orchestrator | 09:37:04.256 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-10-09 09:37:04.257056 | orchestrator | 09:37:04.256 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-10-09 09:37:04.257153 | orchestrator | 09:37:04.257 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-10-09 09:37:04.258102 | orchestrator | 09:37:04.257 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-10-09 09:37:04.258150 | orchestrator | 09:37:04.257 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-10-09 09:37:04.258157 | orchestrator | 09:37:04.257 STDOUT terraform: you run "tofu init" in the future.
2025-10-09 09:37:04.258162 | orchestrator | 09:37:04.257 STDOUT terraform: OpenTofu has been successfully initialized!
2025-10-09 09:37:04.258167 | orchestrator | 09:37:04.257 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-10-09 09:37:04.258171 | orchestrator | 09:37:04.257 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-10-09 09:37:04.258176 | orchestrator | 09:37:04.257 STDOUT terraform: should now work.
2025-10-09 09:37:04.258181 | orchestrator | 09:37:04.257 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-10-09 09:37:04.258185 | orchestrator | 09:37:04.257 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-10-09 09:37:04.258190 | orchestrator | 09:37:04.257 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-10-09 09:37:04.568575 | orchestrator | 09:37:04.568 STDOUT terraform: Created and switched to workspace "ci"!
2025-10-09 09:37:04.568649 | orchestrator | 09:37:04.568 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-10-09 09:37:04.568756 | orchestrator | 09:37:04.568 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-10-09 09:37:04.568784 | orchestrator | 09:37:04.568 STDOUT terraform: for this configuration.
2025-10-09 09:37:04.808943 | orchestrator | 09:37:04.806 STDOUT terraform: ci.auto.tfvars
2025-10-09 09:37:04.979942 | orchestrator | 09:37:04.979 STDOUT terraform: default_custom.tf
2025-10-09 09:37:06.086911 | orchestrator | 09:37:06.086 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
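With init finished and the empty "ci" workspace selected, the plan run starts by reading data sources: the public network resolves immediately, while the two image lookups are deferred to apply time because their configuration refers to values not yet known. A minimal sketch of such lookups, assuming hypothetical variable names; only the data source types, the labels public/image/image_node, and most_recent = true come from the plan output:

data "openstack_networking_network_v2" "public" {
  name = var.public_network # hypothetical variable; the actual selector is not shown in the log
}

data "openstack_images_image_v2" "image" {
  name        = var.image # hypothetical; evaluates to a value known only during apply
  most_recent = true      # taken from the plan output below
}

data "openstack_images_image_v2" "image_node" {
  name        = var.image_node # hypothetical
  most_recent = true
}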
2025-10-09 09:37:06.621671 | orchestrator | 09:37:06.620 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-10-09 09:37:06.815485 | orchestrator | 09:37:06.815 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-10-09 09:37:06.815565 | orchestrator | 09:37:06.815 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-10-09 09:37:06.815572 | orchestrator | 09:37:06.815 STDOUT terraform:  + create 2025-10-09 09:37:06.815626 | orchestrator | 09:37:06.815 STDOUT terraform:  <= read (data resources) 2025-10-09 09:37:06.815693 | orchestrator | 09:37:06.815 STDOUT terraform: OpenTofu will perform the following actions: 2025-10-09 09:37:06.815874 | orchestrator | 09:37:06.815 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-10-09 09:37:06.815990 | orchestrator | 09:37:06.815 STDOUT terraform:  # (config refers to values not yet known) 2025-10-09 09:37:06.816075 | orchestrator | 09:37:06.815 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-10-09 09:37:06.816137 | orchestrator | 09:37:06.816 STDOUT terraform:  + checksum = (known after apply) 2025-10-09 09:37:06.816206 | orchestrator | 09:37:06.816 STDOUT terraform:  + created_at = (known after apply) 2025-10-09 09:37:06.816285 | orchestrator | 09:37:06.816 STDOUT terraform:  + file = (known after apply) 2025-10-09 09:37:06.816356 | orchestrator | 09:37:06.816 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.816462 | orchestrator | 09:37:06.816 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.816493 | orchestrator | 09:37:06.816 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-10-09 09:37:06.816558 | orchestrator | 09:37:06.816 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-10-09 09:37:06.816618 | orchestrator | 09:37:06.816 STDOUT terraform:  + most_recent = true 2025-10-09 09:37:06.816701 | orchestrator | 09:37:06.816 STDOUT terraform:  + name = (known after apply) 2025-10-09 09:37:06.816761 | orchestrator | 09:37:06.816 STDOUT terraform:  + protected = (known after apply) 2025-10-09 09:37:06.816847 | orchestrator | 09:37:06.816 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.816915 | orchestrator | 09:37:06.816 STDOUT terraform:  + schema = (known after apply) 2025-10-09 09:37:06.816970 | orchestrator | 09:37:06.816 STDOUT terraform:  + size_bytes = (known after apply) 2025-10-09 09:37:06.817088 | orchestrator | 09:37:06.816 STDOUT terraform:  + tags = (known after apply) 2025-10-09 09:37:06.817152 | orchestrator | 09:37:06.817 STDOUT terraform:  + updated_at = (known after apply) 2025-10-09 09:37:06.817193 | orchestrator | 09:37:06.817 STDOUT terraform:  } 2025-10-09 09:37:06.817304 | orchestrator | 09:37:06.817 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-10-09 09:37:06.817396 | orchestrator | 09:37:06.817 STDOUT terraform:  # (config refers to values not yet known) 2025-10-09 09:37:06.817465 | orchestrator | 09:37:06.817 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-10-09 09:37:06.817531 | orchestrator | 09:37:06.817 STDOUT terraform:  + checksum = (known after apply) 2025-10-09 09:37:06.817612 | orchestrator | 09:37:06.817 STDOUT terraform:  + created_at = (known after apply) 2025-10-09 09:37:06.817684 | orchestrator | 09:37:06.817 STDOUT terraform:  + file = (known 
after apply) 2025-10-09 09:37:06.817764 | orchestrator | 09:37:06.817 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.817810 | orchestrator | 09:37:06.817 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.817870 | orchestrator | 09:37:06.817 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-10-09 09:37:06.817947 | orchestrator | 09:37:06.817 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-10-09 09:37:06.818081 | orchestrator | 09:37:06.817 STDOUT terraform:  + most_recent = true 2025-10-09 09:37:06.818127 | orchestrator | 09:37:06.818 STDOUT terraform:  + name = (known after apply) 2025-10-09 09:37:06.818250 | orchestrator | 09:37:06.818 STDOUT terraform:  + protected = (known after apply) 2025-10-09 09:37:06.818300 | orchestrator | 09:37:06.818 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.818381 | orchestrator | 09:37:06.818 STDOUT terraform:  + schema = (known after apply) 2025-10-09 09:37:06.818448 | orchestrator | 09:37:06.818 STDOUT terraform:  + size_bytes = (known after apply) 2025-10-09 09:37:06.818545 | orchestrator | 09:37:06.818 STDOUT terraform:  + tags = (known after apply) 2025-10-09 09:37:06.818612 | orchestrator | 09:37:06.818 STDOUT terraform:  + updated_at = (known after apply) 2025-10-09 09:37:06.818669 | orchestrator | 09:37:06.818 STDOUT terraform:  } 2025-10-09 09:37:06.818719 | orchestrator | 09:37:06.818 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-10-09 09:37:06.818799 | orchestrator | 09:37:06.818 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-10-09 09:37:06.818878 | orchestrator | 09:37:06.818 STDOUT terraform:  + content = (known after apply) 2025-10-09 09:37:06.818969 | orchestrator | 09:37:06.818 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-10-09 09:37:06.819111 | orchestrator | 09:37:06.818 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-10-09 09:37:06.819197 | orchestrator | 09:37:06.819 STDOUT terraform:  + content_md5 = (known after apply) 2025-10-09 09:37:06.819285 | orchestrator | 09:37:06.819 STDOUT terraform:  + content_sha1 = (known after apply) 2025-10-09 09:37:06.819370 | orchestrator | 09:37:06.819 STDOUT terraform:  + content_sha256 = (known after apply) 2025-10-09 09:37:06.819455 | orchestrator | 09:37:06.819 STDOUT terraform:  + content_sha512 = (known after apply) 2025-10-09 09:37:06.819509 | orchestrator | 09:37:06.819 STDOUT terraform:  + directory_permission = "0777" 2025-10-09 09:37:06.819561 | orchestrator | 09:37:06.819 STDOUT terraform:  + file_permission = "0644" 2025-10-09 09:37:06.819646 | orchestrator | 09:37:06.819 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-10-09 09:37:06.819740 | orchestrator | 09:37:06.819 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.819777 | orchestrator | 09:37:06.819 STDOUT terraform:  } 2025-10-09 09:37:06.819913 | orchestrator | 09:37:06.819 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-10-09 09:37:06.819975 | orchestrator | 09:37:06.819 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-10-09 09:37:06.820085 | orchestrator | 09:37:06.819 STDOUT terraform:  + content = (known after apply) 2025-10-09 09:37:06.820175 | orchestrator | 09:37:06.820 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-10-09 09:37:06.820249 | orchestrator | 09:37:06.820 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-10-09 09:37:06.820422 | orchestrator | 
09:37:06.820 STDOUT terraform:  + content_md5 = (known after apply) 2025-10-09 09:37:06.820493 | orchestrator | 09:37:06.820 STDOUT terraform:  + content_sha1 = (known after apply) 2025-10-09 09:37:06.820586 | orchestrator | 09:37:06.820 STDOUT terraform:  + content_sha256 = (known after apply) 2025-10-09 09:37:06.820698 | orchestrator | 09:37:06.820 STDOUT terraform:  + content_sha512 = (known after apply) 2025-10-09 09:37:06.820755 | orchestrator | 09:37:06.820 STDOUT terraform:  + directory_permission = "0777" 2025-10-09 09:37:06.820815 | orchestrator | 09:37:06.820 STDOUT terraform:  + file_permission = "0644" 2025-10-09 09:37:06.820937 | orchestrator | 09:37:06.820 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-10-09 09:37:06.821035 | orchestrator | 09:37:06.820 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.821067 | orchestrator | 09:37:06.821 STDOUT terraform:  } 2025-10-09 09:37:06.821146 | orchestrator | 09:37:06.821 STDOUT terraform:  # local_file.inventory will be created 2025-10-09 09:37:06.821192 | orchestrator | 09:37:06.821 STDOUT terraform:  + resource "local_file" "inventory" { 2025-10-09 09:37:06.821267 | orchestrator | 09:37:06.821 STDOUT terraform:  + content = (known after apply) 2025-10-09 09:37:06.821357 | orchestrator | 09:37:06.821 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-10-09 09:37:06.821442 | orchestrator | 09:37:06.821 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-10-09 09:37:06.821528 | orchestrator | 09:37:06.821 STDOUT terraform:  + content_md5 = (known after apply) 2025-10-09 09:37:06.821613 | orchestrator | 09:37:06.821 STDOUT terraform:  + content_sha1 = (known after apply) 2025-10-09 09:37:06.821715 | orchestrator | 09:37:06.821 STDOUT terraform:  + content_sha256 = (known after apply) 2025-10-09 09:37:06.821815 | orchestrator | 09:37:06.821 STDOUT terraform:  + content_sha512 = (known after apply) 2025-10-09 09:37:06.821872 | orchestrator | 09:37:06.821 STDOUT terraform:  + directory_permission = "0777" 2025-10-09 09:37:06.821930 | orchestrator | 09:37:06.821 STDOUT terraform:  + file_permission = "0644" 2025-10-09 09:37:06.821991 | orchestrator | 09:37:06.821 STDOUT terraform:  + filename = "inventory.ci" 2025-10-09 09:37:06.822159 | orchestrator | 09:37:06.821 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.822185 | orchestrator | 09:37:06.822 STDOUT terraform:  } 2025-10-09 09:37:06.822256 | orchestrator | 09:37:06.822 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-10-09 09:37:06.822338 | orchestrator | 09:37:06.822 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-10-09 09:37:06.822404 | orchestrator | 09:37:06.822 STDOUT terraform:  + content = (sensitive value) 2025-10-09 09:37:06.822499 | orchestrator | 09:37:06.822 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-10-09 09:37:06.822616 | orchestrator | 09:37:06.822 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-10-09 09:37:06.822734 | orchestrator | 09:37:06.822 STDOUT terraform:  + content_md5 = (known after apply) 2025-10-09 09:37:06.822811 | orchestrator | 09:37:06.822 STDOUT terraform:  + content_sha1 = (known after apply) 2025-10-09 09:37:06.822923 | orchestrator | 09:37:06.822 STDOUT terraform:  + content_sha256 = (known after apply) 2025-10-09 09:37:06.823005 | orchestrator | 09:37:06.822 STDOUT terraform:  + content_sha512 = (known after apply) 2025-10-09 09:37:06.823077 | orchestrator | 09:37:06.823 STDOUT 
terraform:  + directory_permission = "0700" 2025-10-09 09:37:06.823130 | orchestrator | 09:37:06.823 STDOUT terraform:  + file_permission = "0600" 2025-10-09 09:37:06.823206 | orchestrator | 09:37:06.823 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-10-09 09:37:06.823291 | orchestrator | 09:37:06.823 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.823318 | orchestrator | 09:37:06.823 STDOUT terraform:  } 2025-10-09 09:37:06.823429 | orchestrator | 09:37:06.823 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-10-09 09:37:06.823481 | orchestrator | 09:37:06.823 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-10-09 09:37:06.823535 | orchestrator | 09:37:06.823 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.823558 | orchestrator | 09:37:06.823 STDOUT terraform:  } 2025-10-09 09:37:06.823674 | orchestrator | 09:37:06.823 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-10-09 09:37:06.823800 | orchestrator | 09:37:06.823 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-10-09 09:37:06.823896 | orchestrator | 09:37:06.823 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.823974 | orchestrator | 09:37:06.823 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.824103 | orchestrator | 09:37:06.823 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.824186 | orchestrator | 09:37:06.824 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:06.824269 | orchestrator | 09:37:06.824 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.824404 | orchestrator | 09:37:06.824 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-10-09 09:37:06.824488 | orchestrator | 09:37:06.824 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.824534 | orchestrator | 09:37:06.824 STDOUT terraform:  + size = 80 2025-10-09 09:37:06.824568 | orchestrator | 09:37:06.824 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.824620 | orchestrator | 09:37:06.824 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.824627 | orchestrator | 09:37:06.824 STDOUT terraform:  } 2025-10-09 09:37:06.824728 | orchestrator | 09:37:06.824 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-10-09 09:37:06.824812 | orchestrator | 09:37:06.824 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-10-09 09:37:06.824930 | orchestrator | 09:37:06.824 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.824974 | orchestrator | 09:37:06.824 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.825069 | orchestrator | 09:37:06.824 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.825137 | orchestrator | 09:37:06.825 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:06.825207 | orchestrator | 09:37:06.825 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.825305 | orchestrator | 09:37:06.825 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-10-09 09:37:06.825377 | orchestrator | 09:37:06.825 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.825414 | orchestrator | 09:37:06.825 STDOUT terraform:  + size = 80 2025-10-09 09:37:06.825457 | orchestrator | 09:37:06.825 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 
09:37:06.825514 | orchestrator | 09:37:06.825 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.825538 | orchestrator | 09:37:06.825 STDOUT terraform:  } 2025-10-09 09:37:06.825791 | orchestrator | 09:37:06.825 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-10-09 09:37:06.825877 | orchestrator | 09:37:06.825 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-10-09 09:37:06.825969 | orchestrator | 09:37:06.825 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.826038 | orchestrator | 09:37:06.825 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.826159 | orchestrator | 09:37:06.826 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.826242 | orchestrator | 09:37:06.826 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:06.826309 | orchestrator | 09:37:06.826 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.826388 | orchestrator | 09:37:06.826 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-10-09 09:37:06.826467 | orchestrator | 09:37:06.826 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.826510 | orchestrator | 09:37:06.826 STDOUT terraform:  + size = 80 2025-10-09 09:37:06.826552 | orchestrator | 09:37:06.826 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.826604 | orchestrator | 09:37:06.826 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.826642 | orchestrator | 09:37:06.826 STDOUT terraform:  } 2025-10-09 09:37:06.826735 | orchestrator | 09:37:06.826 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-10-09 09:37:06.826825 | orchestrator | 09:37:06.826 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-10-09 09:37:06.826893 | orchestrator | 09:37:06.826 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.826960 | orchestrator | 09:37:06.826 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.827070 | orchestrator | 09:37:06.826 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.827154 | orchestrator | 09:37:06.827 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:06.827240 | orchestrator | 09:37:06.827 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.827337 | orchestrator | 09:37:06.827 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-10-09 09:37:06.827410 | orchestrator | 09:37:06.827 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.827450 | orchestrator | 09:37:06.827 STDOUT terraform:  + size = 80 2025-10-09 09:37:06.827521 | orchestrator | 09:37:06.827 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.827562 | orchestrator | 09:37:06.827 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.827569 | orchestrator | 09:37:06.827 STDOUT terraform:  } 2025-10-09 09:37:06.827651 | orchestrator | 09:37:06.827 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-10-09 09:37:06.827739 | orchestrator | 09:37:06.827 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-10-09 09:37:06.827794 | orchestrator | 09:37:06.827 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.827832 | orchestrator | 09:37:06.827 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.827892 | orchestrator 
| 09:37:06.827 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.827950 | orchestrator | 09:37:06.827 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:06.828033 | orchestrator | 09:37:06.827 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.828115 | orchestrator | 09:37:06.828 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-10-09 09:37:06.828175 | orchestrator | 09:37:06.828 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.828211 | orchestrator | 09:37:06.828 STDOUT terraform:  + size = 80 2025-10-09 09:37:06.828251 | orchestrator | 09:37:06.828 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.828296 | orchestrator | 09:37:06.828 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.828303 | orchestrator | 09:37:06.828 STDOUT terraform:  } 2025-10-09 09:37:06.828388 | orchestrator | 09:37:06.828 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-10-09 09:37:06.828457 | orchestrator | 09:37:06.828 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-10-09 09:37:06.828545 | orchestrator | 09:37:06.828 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.828581 | orchestrator | 09:37:06.828 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.828642 | orchestrator | 09:37:06.828 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.828719 | orchestrator | 09:37:06.828 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:06.828804 | orchestrator | 09:37:06.828 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.828878 | orchestrator | 09:37:06.828 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-10-09 09:37:06.828942 | orchestrator | 09:37:06.828 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.828973 | orchestrator | 09:37:06.828 STDOUT terraform:  + size = 80 2025-10-09 09:37:06.829019 | orchestrator | 09:37:06.828 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.829059 | orchestrator | 09:37:06.829 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.829091 | orchestrator | 09:37:06.829 STDOUT terraform:  } 2025-10-09 09:37:06.829178 | orchestrator | 09:37:06.829 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-10-09 09:37:06.829255 | orchestrator | 09:37:06.829 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-10-09 09:37:06.829335 | orchestrator | 09:37:06.829 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.829375 | orchestrator | 09:37:06.829 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.829432 | orchestrator | 09:37:06.829 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.829500 | orchestrator | 09:37:06.829 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:06.829552 | orchestrator | 09:37:06.829 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.829633 | orchestrator | 09:37:06.829 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-10-09 09:37:06.829698 | orchestrator | 09:37:06.829 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.829732 | orchestrator | 09:37:06.829 STDOUT terraform:  + size = 80 2025-10-09 09:37:06.829767 | orchestrator | 09:37:06.829 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 
09:37:06.829807 | orchestrator | 09:37:06.829 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.829829 | orchestrator | 09:37:06.829 STDOUT terraform:  } 2025-10-09 09:37:06.829931 | orchestrator | 09:37:06.829 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-10-09 09:37:06.829993 | orchestrator | 09:37:06.829 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.830112 | orchestrator | 09:37:06.829 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.830159 | orchestrator | 09:37:06.830 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.830215 | orchestrator | 09:37:06.830 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.830276 | orchestrator | 09:37:06.830 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.830337 | orchestrator | 09:37:06.830 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-10-09 09:37:06.830393 | orchestrator | 09:37:06.830 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.830436 | orchestrator | 09:37:06.830 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.830473 | orchestrator | 09:37:06.830 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.830540 | orchestrator | 09:37:06.830 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.830566 | orchestrator | 09:37:06.830 STDOUT terraform:  } 2025-10-09 09:37:06.830643 | orchestrator | 09:37:06.830 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-10-09 09:37:06.830707 | orchestrator | 09:37:06.830 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.830789 | orchestrator | 09:37:06.830 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.830825 | orchestrator | 09:37:06.830 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.830917 | orchestrator | 09:37:06.830 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.830992 | orchestrator | 09:37:06.830 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.831084 | orchestrator | 09:37:06.830 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-10-09 09:37:06.831162 | orchestrator | 09:37:06.831 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.831212 | orchestrator | 09:37:06.831 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.831271 | orchestrator | 09:37:06.831 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.831316 | orchestrator | 09:37:06.831 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.831337 | orchestrator | 09:37:06.831 STDOUT terraform:  } 2025-10-09 09:37:06.831406 | orchestrator | 09:37:06.831 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-10-09 09:37:06.831473 | orchestrator | 09:37:06.831 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.831534 | orchestrator | 09:37:06.831 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.831563 | orchestrator | 09:37:06.831 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.831624 | orchestrator | 09:37:06.831 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.831684 | orchestrator | 09:37:06.831 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.831748 | orchestrator | 09:37:06.831 STDOUT terraform:  + name 
= "testbed-volume-2-node-5" 2025-10-09 09:37:06.834196 | orchestrator | 09:37:06.831 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.834280 | orchestrator | 09:37:06.832 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.834289 | orchestrator | 09:37:06.832 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.834296 | orchestrator | 09:37:06.832 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.834303 | orchestrator | 09:37:06.832 STDOUT terraform:  } 2025-10-09 09:37:06.834309 | orchestrator | 09:37:06.832 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-10-09 09:37:06.834333 | orchestrator | 09:37:06.832 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.834339 | orchestrator | 09:37:06.832 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.834345 | orchestrator | 09:37:06.832 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.834351 | orchestrator | 09:37:06.832 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.834356 | orchestrator | 09:37:06.832 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.834362 | orchestrator | 09:37:06.833 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-10-09 09:37:06.834368 | orchestrator | 09:37:06.833 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.834373 | orchestrator | 09:37:06.833 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.834378 | orchestrator | 09:37:06.833 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.834384 | orchestrator | 09:37:06.833 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.834389 | orchestrator | 09:37:06.833 STDOUT terraform:  } 2025-10-09 09:37:06.834403 | orchestrator | 09:37:06.833 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-10-09 09:37:06.834409 | orchestrator | 09:37:06.833 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.834415 | orchestrator | 09:37:06.833 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.834420 | orchestrator | 09:37:06.833 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.834425 | orchestrator | 09:37:06.833 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.834431 | orchestrator | 09:37:06.833 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.834436 | orchestrator | 09:37:06.833 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-10-09 09:37:06.834441 | orchestrator | 09:37:06.833 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.834447 | orchestrator | 09:37:06.833 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.834452 | orchestrator | 09:37:06.833 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.834457 | orchestrator | 09:37:06.833 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.834462 | orchestrator | 09:37:06.833 STDOUT terraform:  } 2025-10-09 09:37:06.834468 | orchestrator | 09:37:06.833 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-10-09 09:37:06.834473 | orchestrator | 09:37:06.833 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.834478 | orchestrator | 09:37:06.833 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.834484 | orchestrator | 09:37:06.833 
STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.834489 | orchestrator | 09:37:06.833 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.834494 | orchestrator | 09:37:06.833 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.834505 | orchestrator | 09:37:06.833 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-10-09 09:37:06.834510 | orchestrator | 09:37:06.833 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.834533 | orchestrator | 09:37:06.833 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.834539 | orchestrator | 09:37:06.833 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.834544 | orchestrator | 09:37:06.833 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.834549 | orchestrator | 09:37:06.833 STDOUT terraform:  } 2025-10-09 09:37:06.834555 | orchestrator | 09:37:06.833 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-10-09 09:37:06.834560 | orchestrator | 09:37:06.833 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.834565 | orchestrator | 09:37:06.833 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.834571 | orchestrator | 09:37:06.833 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.834576 | orchestrator | 09:37:06.833 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.834581 | orchestrator | 09:37:06.833 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.834586 | orchestrator | 09:37:06.833 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-10-09 09:37:06.834592 | orchestrator | 09:37:06.834 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.834597 | orchestrator | 09:37:06.834 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.834602 | orchestrator | 09:37:06.834 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.834608 | orchestrator | 09:37:06.834 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.834613 | orchestrator | 09:37:06.834 STDOUT terraform:  } 2025-10-09 09:37:06.834622 | orchestrator | 09:37:06.834 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-10-09 09:37:06.834627 | orchestrator | 09:37:06.834 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.834633 | orchestrator | 09:37:06.834 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.834638 | orchestrator | 09:37:06.834 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.834643 | orchestrator | 09:37:06.834 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.834649 | orchestrator | 09:37:06.834 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.834654 | orchestrator | 09:37:06.834 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-10-09 09:37:06.834659 | orchestrator | 09:37:06.834 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.834664 | orchestrator | 09:37:06.834 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.834669 | orchestrator | 09:37:06.834 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.834675 | orchestrator | 09:37:06.834 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.834680 | orchestrator | 09:37:06.834 STDOUT terraform:  } 2025-10-09 09:37:06.834690 | orchestrator | 09:37:06.834 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-10-09 09:37:06.834698 | orchestrator | 09:37:06.834 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:06.834704 | orchestrator | 09:37:06.834 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:06.834709 | orchestrator | 09:37:06.834 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.835004 | orchestrator | 09:37:06.834 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.835112 | orchestrator | 09:37:06.835 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:06.835165 | orchestrator | 09:37:06.835 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-10-09 09:37:06.835204 | orchestrator | 09:37:06.835 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.835234 | orchestrator | 09:37:06.835 STDOUT terraform:  + size = 20 2025-10-09 09:37:06.835262 | orchestrator | 09:37:06.835 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:06.835288 | orchestrator | 09:37:06.835 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:06.835298 | orchestrator | 09:37:06.835 STDOUT terraform:  } 2025-10-09 09:37:06.835349 | orchestrator | 09:37:06.835 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-10-09 09:37:06.835397 | orchestrator | 09:37:06.835 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-10-09 09:37:06.835434 | orchestrator | 09:37:06.835 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-09 09:37:06.835473 | orchestrator | 09:37:06.835 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-09 09:37:06.835510 | orchestrator | 09:37:06.835 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-09 09:37:06.835550 | orchestrator | 09:37:06.835 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:06.835576 | orchestrator | 09:37:06.835 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.835602 | orchestrator | 09:37:06.835 STDOUT terraform:  + config_drive = true 2025-10-09 09:37:06.835640 | orchestrator | 09:37:06.835 STDOUT terraform:  + created = (known after apply) 2025-10-09 09:37:06.835679 | orchestrator | 09:37:06.835 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-09 09:37:06.835712 | orchestrator | 09:37:06.835 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-10-09 09:37:06.835742 | orchestrator | 09:37:06.835 STDOUT terraform:  + force_delete = false 2025-10-09 09:37:06.835779 | orchestrator | 09:37:06.835 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-09 09:37:06.835819 | orchestrator | 09:37:06.835 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.835855 | orchestrator | 09:37:06.835 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:06.835893 | orchestrator | 09:37:06.835 STDOUT terraform:  + image_name = (known after apply) 2025-10-09 09:37:06.835935 | orchestrator | 09:37:06.835 STDOUT terraform:  + key_pair = "testbed" 2025-10-09 09:37:06.835952 | orchestrator | 09:37:06.835 STDOUT terraform:  + name = "testbed-manager" 2025-10-09 09:37:06.835983 | orchestrator | 09:37:06.835 STDOUT terraform:  + power_state = "active" 2025-10-09 09:37:06.836038 | orchestrator | 09:37:06.835 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.836076 | orchestrator | 09:37:06.836 STDOUT terraform:  + security_groups = (known after apply) 2025-10-09 
09:37:06.836104 | orchestrator | 09:37:06.836 STDOUT terraform:  + stop_before_destroy = false 2025-10-09 09:37:06.836143 | orchestrator | 09:37:06.836 STDOUT terraform:  + updated = (known after apply) 2025-10-09 09:37:06.836176 | orchestrator | 09:37:06.836 STDOUT terraform:  + user_data = (sensitive value) 2025-10-09 09:37:06.836202 | orchestrator | 09:37:06.836 STDOUT terraform:  + block_device { 2025-10-09 09:37:06.836225 | orchestrator | 09:37:06.836 STDOUT terraform:  + boot_index = 0 2025-10-09 09:37:06.836259 | orchestrator | 09:37:06.836 STDOUT terraform:  + delete_on_termination = false 2025-10-09 09:37:06.836301 | orchestrator | 09:37:06.836 STDOUT terraform:  + destination_type = "volume" 2025-10-09 09:37:06.836329 | orchestrator | 09:37:06.836 STDOUT terraform:  + multiattach = false 2025-10-09 09:37:06.836369 | orchestrator | 09:37:06.836 STDOUT terraform:  + source_type = "volume" 2025-10-09 09:37:06.836421 | orchestrator | 09:37:06.836 STDOUT terraform:  + uuid = (known after apply) 2025-10-09 09:37:06.836433 | orchestrator | 09:37:06.836 STDOUT terraform:  } 2025-10-09 09:37:06.836470 | orchestrator | 09:37:06.836 STDOUT terraform:  + network { 2025-10-09 09:37:06.836481 | orchestrator | 09:37:06.836 STDOUT terraform:  + access_network = false 2025-10-09 09:37:06.836537 | orchestrator | 09:37:06.836 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-10-09 09:37:06.836578 | orchestrator | 09:37:06.836 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-10-09 09:37:06.836620 | orchestrator | 09:37:06.836 STDOUT terraform:  + mac = (known after apply) 2025-10-09 09:37:06.841083 | orchestrator | 09:37:06.836 STDOUT terraform:  + name = (known after apply) 2025-10-09 09:37:06.841304 | orchestrator | 09:37:06.841 STDOUT terraform:  + port = (known after apply) 2025-10-09 09:37:06.841354 | orchestrator | 09:37:06.841 STDOUT terraform:  + uuid = (known after apply) 2025-10-09 09:37:06.841440 | orchestrator | 09:37:06.841 STDOUT terraform:  } 2025-10-09 09:37:06.841504 | orchestrator | 09:37:06.841 STDOUT terraform:  } 2025-10-09 09:37:06.841730 | orchestrator | 09:37:06.841 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-10-09 09:37:06.841946 | orchestrator | 09:37:06.841 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-10-09 09:37:06.843431 | orchestrator | 09:37:06.841 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-09 09:37:06.843627 | orchestrator | 09:37:06.843 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-09 09:37:06.843867 | orchestrator | 09:37:06.843 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-09 09:37:06.843905 | orchestrator | 09:37:06.843 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:06.844003 | orchestrator | 09:37:06.843 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:06.844066 | orchestrator | 09:37:06.843 STDOUT terraform:  + config_drive = true 2025-10-09 09:37:06.844083 | orchestrator | 09:37:06.844 STDOUT terraform:  + created = (known after apply) 2025-10-09 09:37:06.844478 | orchestrator | 09:37:06.844 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-09 09:37:06.844586 | orchestrator | 09:37:06.844 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-10-09 09:37:06.844740 | orchestrator | 09:37:06.844 STDOUT terraform:  + force_delete = false 2025-10-09 09:37:06.844745 | orchestrator | 09:37:06.844 STDOUT terraform:  + hypervisor_hostname = (known after apply) 
2025-10-09 09:37:06.844749 | orchestrator | 09:37:06.844 STDOUT terraform:
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + image_name           = (known after apply)
      + key_pair             = "testbed"
      + name                 = "testbed-node-0"
      + power_state          = "active"
      + region               = (known after apply)
      + security_groups      = (known after apply)
      + stop_before_destroy  = false
      + updated              = (known after apply)
      + user_data            = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4         = (known after apply)
      + access_ip_v6         = (known after apply)
      + all_metadata         = (known after apply)
      + all_tags             = (known after apply)
      + availability_zone    = "nova"
      + config_drive         = true
      + created              = (known after apply)
      + flavor_id            = (known after apply)
      + flavor_name          = "OSISM-8V-32"
      + force_delete         = false
      + hypervisor_hostname  = (known after apply)
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + image_name           = (known after apply)
      + key_pair             = "testbed"
      + name                 = "testbed-node-1"
      + power_state          = "active"
      + region               = (known after apply)
      + security_groups      = (known after apply)
      + stop_before_destroy  = false
      + updated              = (known after apply)
      + user_data            = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  # openstack_compute_instance_v2.node_server[3] will be created
  # openstack_compute_instance_v2.node_server[4] will be created
  # openstack_compute_instance_v2.node_server[5] will be created
    (each identical to node_server[1] above, apart from name = "testbed-node-2" … "testbed-node-5")
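For reference, a minimal HCL sketch that would yield node_server plan entries of this shape (illustration only, assuming the standard terraform-provider-openstack arguments; the volume and port references and the user_data source are placeholders, not the testbed repository's actual configuration):

  resource "openstack_compute_instance_v2" "node_server" {
    count             = 6
    name              = "testbed-node-${count.index}"
    availability_zone = "nova"
    flavor_name       = "OSISM-8V-32"
    key_pair          = "testbed"
    config_drive      = true
    power_state       = "active"
    user_data         = file("user_data.yml") # shown in the plan only as a hash

    # Boot from a pre-created root volume rather than an image.
    block_device {
      uuid                  = openstack_blockstorage_volume_v3.node_root[count.index].id
      source_type           = "volume"
      destination_type      = "volume"
      boot_index            = 0
      delete_on_termination = false
    }

    # Attach the node's pre-created management port.
    network {
      port = openstack_networking_port_v2.node_port_management[count.index].id
    }
  }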
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] … [8] will be created
    (eight further attachments, identical to node_volume_attachment[0] above)

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }
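A sketch of the keypair and the volume attachments (assumptions for illustration; the provider generates a keypair when no public_key is supplied, which matches the sensitive private_key in the plan, but the attachment count of 9 and the instance/volume indexing are only inferred from this excerpt):

  resource "openstack_compute_keypair_v2" "key" {
    # No public_key given, so the provider generates one and exposes the
    # private key as a sensitive attribute.
    name = "testbed"
  }

  resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
    count = 9
    # How attachments map to instances and volumes is not visible here;
    # the indexing below is purely illustrative.
    instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
    volume_id   = openstack_blockstorage_volume_v3.node_extra[count.index].id
  }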
  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
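The management network, the manager port and its floating IP map onto resources roughly like the following (a sketch under assumptions; the subnet reference is illustrative, since the subnet resource does not appear in this part of the plan):

  resource "openstack_networking_network_v2" "net_management" {
    name                    = "net-testbed-management"
    availability_zone_hints = ["nova"]
  }

  resource "openstack_networking_port_v2" "manager_port_management" {
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id # illustrative
      ip_address = "192.168.16.5"
    }

    # Additional address permitted on this port (purpose not visible here).
    allowed_address_pairs {
      ip_address = "192.168.16.8/32"
    }
  }

  resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
    pool = "public"
  }

  resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
    floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
    port_id     = openstack_networking_port_v2.manager_port_management.id
  }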
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  # openstack_networking_port_v2.node_port_management[2] will be created
  # openstack_networking_port_v2.node_port_management[3] will be created
  # openstack_networking_port_v2.node_port_management[4] will be created
  # openstack_networking_port_v2.node_port_management[5] will be created
    (each identical to node_port_management[0] above, apart from fixed_ip.ip_address =
     "192.168.16.11" … "192.168.16.15")
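The six node ports differ only in their fixed IP (192.168.16.10 through 192.168.16.15), which fits a count-based sketch like this (illustrative only; the subnet reference and the address arithmetic are assumptions):

  resource "openstack_networking_port_v2" "node_port_management" {
    count      = 6
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id # illustrative
      ip_address = "192.168.16.${10 + count.index}"
    }

    # The same three extra addresses are allowed on every node port.
    allowed_address_pairs {
      ip_address = "192.168.16.254/32"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.8/32"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.9/32"
    }
  }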
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        =
"192.168.16.0/20" 2025-10-09 09:37:06.898036 | orchestrator | 09:37:06.897 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:06.898086 | orchestrator | 09:37:06.898 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.898110 | orchestrator | 09:37:06.898 STDOUT terraform:  } 2025-10-09 09:37:06.898174 | orchestrator | 09:37:06.898 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-10-09 09:37:06.898237 | orchestrator | 09:37:06.898 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-10-09 09:37:06.898277 | orchestrator | 09:37:06.898 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:06.898310 | orchestrator | 09:37:06.898 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:06.898357 | orchestrator | 09:37:06.898 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.898393 | orchestrator | 09:37:06.898 STDOUT terraform:  + protocol = "udp" 2025-10-09 09:37:06.898442 | orchestrator | 09:37:06.898 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.898486 | orchestrator | 09:37:06.898 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:06.898531 | orchestrator | 09:37:06.898 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:06.898573 | orchestrator | 09:37:06.898 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-10-09 09:37:06.898618 | orchestrator | 09:37:06.898 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:06.898668 | orchestrator | 09:37:06.898 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.898696 | orchestrator | 09:37:06.898 STDOUT terraform:  } 2025-10-09 09:37:06.898755 | orchestrator | 09:37:06.898 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-10-09 09:37:06.898817 | orchestrator | 09:37:06.898 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-10-09 09:37:06.898853 | orchestrator | 09:37:06.898 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:06.898885 | orchestrator | 09:37:06.898 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:06.898929 | orchestrator | 09:37:06.898 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.898969 | orchestrator | 09:37:06.898 STDOUT terraform:  + protocol = "icmp" 2025-10-09 09:37:06.899060 | orchestrator | 09:37:06.898 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.899108 | orchestrator | 09:37:06.899 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:06.899152 | orchestrator | 09:37:06.899 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:06.899189 | orchestrator | 09:37:06.899 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:06.899233 | orchestrator | 09:37:06.899 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:06.899286 | orchestrator | 09:37:06.899 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.899309 | orchestrator | 09:37:06.899 STDOUT terraform:  } 2025-10-09 09:37:06.899367 | orchestrator | 09:37:06.899 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-10-09 09:37:06.899425 | orchestrator | 09:37:06.899 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-10-09 09:37:06.899460 | orchestrator | 09:37:06.899 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:06.899499 | orchestrator | 09:37:06.899 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:06.899544 | orchestrator | 09:37:06.899 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.899575 | orchestrator | 09:37:06.899 STDOUT terraform:  + protocol = "tcp" 2025-10-09 09:37:06.899618 | orchestrator | 09:37:06.899 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.899659 | orchestrator | 09:37:06.899 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:06.899701 | orchestrator | 09:37:06.899 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:06.899740 | orchestrator | 09:37:06.899 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:06.899782 | orchestrator | 09:37:06.899 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:06.899826 | orchestrator | 09:37:06.899 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.899846 | orchestrator | 09:37:06.899 STDOUT terraform:  } 2025-10-09 09:37:06.899904 | orchestrator | 09:37:06.899 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-10-09 09:37:06.899961 | orchestrator | 09:37:06.899 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-10-09 09:37:06.900004 | orchestrator | 09:37:06.899 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:06.900048 | orchestrator | 09:37:06.900 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:06.900093 | orchestrator | 09:37:06.900 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.900125 | orchestrator | 09:37:06.900 STDOUT terraform:  + protocol = "udp" 2025-10-09 09:37:06.900178 | orchestrator | 09:37:06.900 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.900235 | orchestrator | 09:37:06.900 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:06.900281 | orchestrator | 09:37:06.900 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:06.900316 | orchestrator | 09:37:06.900 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:06.900358 | orchestrator | 09:37:06.900 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:06.900400 | orchestrator | 09:37:06.900 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.900419 | orchestrator | 09:37:06.900 STDOUT terraform:  } 2025-10-09 09:37:06.900475 | orchestrator | 09:37:06.900 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-10-09 09:37:06.900534 | orchestrator | 09:37:06.900 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-10-09 09:37:06.900575 | orchestrator | 09:37:06.900 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:06.900608 | orchestrator | 09:37:06.900 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:06.900658 | orchestrator | 09:37:06.900 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.900696 | orchestrator | 09:37:06.900 STDOUT terraform:  + protocol = "icmp" 2025-10-09 09:37:06.900741 | orchestrator | 09:37:06.900 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.900782 | 
orchestrator | 09:37:06.900 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:06.900833 | orchestrator | 09:37:06.900 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:06.900870 | orchestrator | 09:37:06.900 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:06.900919 | orchestrator | 09:37:06.900 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:06.900963 | orchestrator | 09:37:06.900 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.900984 | orchestrator | 09:37:06.900 STDOUT terraform:  } 2025-10-09 09:37:06.901066 | orchestrator | 09:37:06.900 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-10-09 09:37:06.901133 | orchestrator | 09:37:06.901 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-10-09 09:37:06.901168 | orchestrator | 09:37:06.901 STDOUT terraform:  + description = "vrrp" 2025-10-09 09:37:06.901204 | orchestrator | 09:37:06.901 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:06.901236 | orchestrator | 09:37:06.901 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:06.901281 | orchestrator | 09:37:06.901 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.901320 | orchestrator | 09:37:06.901 STDOUT terraform:  + protocol = "112" 2025-10-09 09:37:06.901364 | orchestrator | 09:37:06.901 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.901406 | orchestrator | 09:37:06.901 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:06.901458 | orchestrator | 09:37:06.901 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:06.901497 | orchestrator | 09:37:06.901 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:06.901539 | orchestrator | 09:37:06.901 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:06.901585 | orchestrator | 09:37:06.901 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.901605 | orchestrator | 09:37:06.901 STDOUT terraform:  } 2025-10-09 09:37:06.901661 | orchestrator | 09:37:06.901 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-10-09 09:37:06.901717 | orchestrator | 09:37:06.901 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-10-09 09:37:06.901753 | orchestrator | 09:37:06.901 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:06.901793 | orchestrator | 09:37:06.901 STDOUT terraform:  + description = "management security group" 2025-10-09 09:37:06.901827 | orchestrator | 09:37:06.901 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.901864 | orchestrator | 09:37:06.901 STDOUT terraform:  + name = "testbed-management" 2025-10-09 09:37:06.901898 | orchestrator | 09:37:06.901 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.901932 | orchestrator | 09:37:06.901 STDOUT terraform:  + stateful = (known after apply) 2025-10-09 09:37:06.901968 | orchestrator | 09:37:06.901 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.901987 | orchestrator | 09:37:06.901 STDOUT terraform:  } 2025-10-09 09:37:06.902066 | orchestrator | 09:37:06.901 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-10-09 09:37:06.902122 | orchestrator | 09:37:06.902 STDOUT 
terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-10-09 09:37:06.902159 | orchestrator | 09:37:06.902 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:06.902196 | orchestrator | 09:37:06.902 STDOUT terraform:  + description = "node security group" 2025-10-09 09:37:06.902231 | orchestrator | 09:37:06.902 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.902263 | orchestrator | 09:37:06.902 STDOUT terraform:  + name = "testbed-node" 2025-10-09 09:37:06.902299 | orchestrator | 09:37:06.902 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.902336 | orchestrator | 09:37:06.902 STDOUT terraform:  + stateful = (known after apply) 2025-10-09 09:37:06.902372 | orchestrator | 09:37:06.902 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.902395 | orchestrator | 09:37:06.902 STDOUT terraform:  } 2025-10-09 09:37:06.902578 | orchestrator | 09:37:06.902 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-10-09 09:37:06.902705 | orchestrator | 09:37:06.902 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-10-09 09:37:06.902738 | orchestrator | 09:37:06.902 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:06.902771 | orchestrator | 09:37:06.902 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-10-09 09:37:06.902802 | orchestrator | 09:37:06.902 STDOUT terraform:  + dns_nameservers = [ 2025-10-09 09:37:06.902815 | orchestrator | 09:37:06.902 STDOUT terraform:  + "8.8.8.8", 2025-10-09 09:37:06.902837 | orchestrator | 09:37:06.902 STDOUT terraform:  + "9.9.9.9", 2025-10-09 09:37:06.902844 | orchestrator | 09:37:06.902 STDOUT terraform:  ] 2025-10-09 09:37:06.902868 | orchestrator | 09:37:06.902 STDOUT terraform:  + enable_dhcp = true 2025-10-09 09:37:06.902900 | orchestrator | 09:37:06.902 STDOUT terraform:  + gateway_ip = (known after apply) 2025-10-09 09:37:06.902939 | orchestrator | 09:37:06.902 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.902973 | orchestrator | 09:37:06.902 STDOUT terraform:  + ip_version = 4 2025-10-09 09:37:06.903004 | orchestrator | 09:37:06.902 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-10-09 09:37:06.903039 | orchestrator | 09:37:06.902 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-10-09 09:37:06.903104 | orchestrator | 09:37:06.903 STDOUT terraform:  + name = "subnet-testbed-management" 2025-10-09 09:37:06.903112 | orchestrator | 09:37:06.903 STDOUT terraform:  + network_id = (known after apply) 2025-10-09 09:37:06.903135 | orchestrator | 09:37:06.903 STDOUT terraform:  + no_gateway = false 2025-10-09 09:37:06.903173 | orchestrator | 09:37:06.903 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:06.903204 | orchestrator | 09:37:06.903 STDOUT terraform:  + service_types = (known after apply) 2025-10-09 09:37:06.903254 | orchestrator | 09:37:06.903 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:06.903262 | orchestrator | 09:37:06.903 STDOUT terraform:  + allocation_pool { 2025-10-09 09:37:06.903288 | orchestrator | 09:37:06.903 STDOUT terraform:  + end = "192.168.31.250" 2025-10-09 09:37:06.903317 | orchestrator | 09:37:06.903 STDOUT terraform:  + start = "192.168.31.200" 2025-10-09 09:37:06.903325 | orchestrator | 09:37:06.903 STDOUT terraform:  } 2025-10-09 09:37:06.903331 | orchestrator | 09:37:06.903 STDOUT terraform:  } 2025-10-09 09:37:06.903367 | orchestrator | 
09:37:06.903 STDOUT terraform:  # terraform_data.image will be created 2025-10-09 09:37:06.903394 | orchestrator | 09:37:06.903 STDOUT terraform:  + resource "terraform_data" "image" { 2025-10-09 09:37:06.903419 | orchestrator | 09:37:06.903 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.903441 | orchestrator | 09:37:06.903 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-10-09 09:37:06.903482 | orchestrator | 09:37:06.903 STDOUT terraform:  + output = (known after apply) 2025-10-09 09:37:06.903489 | orchestrator | 09:37:06.903 STDOUT terraform:  } 2025-10-09 09:37:06.903523 | orchestrator | 09:37:06.903 STDOUT terraform:  # terraform_data.image_node will be created 2025-10-09 09:37:06.903561 | orchestrator | 09:37:06.903 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-10-09 09:37:06.903585 | orchestrator | 09:37:06.903 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:06.903609 | orchestrator | 09:37:06.903 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-10-09 09:37:06.903632 | orchestrator | 09:37:06.903 STDOUT terraform:  + output = (known after apply) 2025-10-09 09:37:06.903639 | orchestrator | 09:37:06.903 STDOUT terraform:  } 2025-10-09 09:37:06.903681 | orchestrator | 09:37:06.903 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-10-09 09:37:06.903688 | orchestrator | 09:37:06.903 STDOUT terraform: Changes to Outputs: 2025-10-09 09:37:06.903716 | orchestrator | 09:37:06.903 STDOUT terraform:  + manager_address = (sensitive value) 2025-10-09 09:37:06.903741 | orchestrator | 09:37:06.903 STDOUT terraform:  + private_key = (sensitive value) 2025-10-09 09:37:07.074075 | orchestrator | 09:37:07.073 STDOUT terraform: terraform_data.image: Creating... 2025-10-09 09:37:07.074151 | orchestrator | 09:37:07.073 STDOUT terraform: terraform_data.image_node: Creating... 2025-10-09 09:37:07.074159 | orchestrator | 09:37:07.073 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=4aba4720-59a1-ee6c-487c-26f71b3eb2a1] 2025-10-09 09:37:07.074168 | orchestrator | 09:37:07.073 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=8f9d80f1-e704-ad49-28c5-572b37a00709] 2025-10-09 09:37:07.098816 | orchestrator | 09:37:07.098 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-10-09 09:37:07.098984 | orchestrator | 09:37:07.098 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-10-09 09:37:07.103271 | orchestrator | 09:37:07.103 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-10-09 09:37:07.103309 | orchestrator | 09:37:07.103 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-10-09 09:37:07.110763 | orchestrator | 09:37:07.110 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-10-09 09:37:07.111825 | orchestrator | 09:37:07.111 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-10-09 09:37:07.112836 | orchestrator | 09:37:07.112 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-10-09 09:37:07.113741 | orchestrator | 09:37:07.113 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-10-09 09:37:07.115616 | orchestrator | 09:37:07.115 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-10-09 09:37:07.126811 | orchestrator | 09:37:07.126 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 
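[Editor's note] The two terraform_data resources in the plan above only carry the image name "Ubuntu 24.04"; their output is (known after apply), which is why the Glance image lookups run during apply rather than plan. A minimal HCL sketch of that pattern, not necessarily the testbed's actual code (the variable name and the data-source wiring are assumptions):

variable "image" {
  type    = string
  default = "Ubuntu 24.04"
}

# terraform_data passes the image name through; dependents see its value
# only after apply, matching the "(known after apply)" output in the plan.
resource "terraform_data" "image" {
  input = var.image
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}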
2025-10-09 09:37:07.531636 | orchestrator | 09:37:07.531 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-10-09 09:37:07.536957 | orchestrator | 09:37:07.536 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-10-09 09:37:07.547665 | orchestrator | 09:37:07.547 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-10-09 09:37:07.551518 | orchestrator | 09:37:07.551 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-10-09 09:37:07.696256 | orchestrator | 09:37:07.696 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-10-09 09:37:07.701496 | orchestrator | 09:37:07.701 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-10-09 09:37:08.026098 | orchestrator | 09:37:08.019 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=368ec32d-6ba2-49d9-a006-63b14d86cd50] 2025-10-09 09:37:08.036862 | orchestrator | 09:37:08.036 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-10-09 09:37:10.698738 | orchestrator | 09:37:10.698 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=5ff7f971-6db7-40fa-8173-f07df227bea5] 2025-10-09 09:37:10.703558 | orchestrator | 09:37:10.703 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-10-09 09:37:10.721064 | orchestrator | 09:37:10.720 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=814c6d5d-dde1-4cc2-a34c-0e2957be6b16] 2025-10-09 09:37:10.730060 | orchestrator | 09:37:10.729 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=79b0088c-29b1-4160-91bf-334de808a256] 2025-10-09 09:37:10.732870 | orchestrator | 09:37:10.732 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=aa506c68-74fd-4bd6-80c5-fdf748fe0317] 2025-10-09 09:37:10.734520 | orchestrator | 09:37:10.734 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-10-09 09:37:10.736146 | orchestrator | 09:37:10.736 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-10-09 09:37:10.738962 | orchestrator | 09:37:10.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-10-09 09:37:10.745330 | orchestrator | 09:37:10.745 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=962e680444734e5c0b37eb8d1b46dae3ffd00aa7] 2025-10-09 09:37:10.756776 | orchestrator | 09:37:10.756 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-10-09 09:37:10.759191 | orchestrator | 09:37:10.758 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=a18ef090-7db7-4160-bb62-4e2315b7c41c] 2025-10-09 09:37:10.767682 | orchestrator | 09:37:10.767 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=14000628-bc87-4bc6-bf7c-9844b85594c2] 2025-10-09 09:37:10.771767 | orchestrator | 09:37:10.771 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-10-09 09:37:10.774565 | orchestrator | 09:37:10.774 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
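[Editor's note] The node_volume[0]..node_volume[8] resources completing here are plain Cinder volumes created with count. A minimal sketch; only the resource type and the count of nine come from the log, the name pattern and size are assumptions:

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                                     # node_volume[0] .. node_volume[8] in the plan
  name  = "testbed-node-volume-${count.index}"  # name pattern is an assumption
  size  = 20                                    # size in GB is an assumption
}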
2025-10-09 09:37:10.782386 | orchestrator | 09:37:10.782 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=b8594df3-e172-4a8f-830d-865dfc79f7eb] 2025-10-09 09:37:10.790182 | orchestrator | 09:37:10.790 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-10-09 09:37:10.794935 | orchestrator | 09:37:10.794 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=eebc6f6a604913e3d4085524a17eb8b32b910b5a] 2025-10-09 09:37:10.798947 | orchestrator | 09:37:10.798 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-10-09 09:37:10.807457 | orchestrator | 09:37:10.807 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=6cf84d94-208a-47e1-8626-6f0f49ffb66b] 2025-10-09 09:37:10.895644 | orchestrator | 09:37:10.895 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=cb120539-e45f-4dc2-9fd5-4d9c62406057] 2025-10-09 09:37:11.364752 | orchestrator | 09:37:11.364 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=9ef4751a-e7cc-46a6-97de-8f14706988df] 2025-10-09 09:37:11.641465 | orchestrator | 09:37:11.641 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=5fdcf11b-c49e-4a74-834d-a968fb93f0c9] 2025-10-09 09:37:11.655523 | orchestrator | 09:37:11.655 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-10-09 09:37:14.084100 | orchestrator | 09:37:14.083 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f] 2025-10-09 09:37:14.159443 | orchestrator | 09:37:14.159 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=f028fdc4-497f-4226-9d58-f945a4d4fd39] 2025-10-09 09:37:14.166397 | orchestrator | 09:37:14.166 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=4e0aebcf-e4e3-4bea-9a67-8744fd10874e] 2025-10-09 09:37:14.189995 | orchestrator | 09:37:14.189 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=e6296768-f199-4dcc-a087-8057fc5b5b41] 2025-10-09 09:37:14.191568 | orchestrator | 09:37:14.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=896c45e4-ea0f-4d10-b682-ab8e70dacfed] 2025-10-09 09:37:14.193474 | orchestrator | 09:37:14.193 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=2d400245-ddcf-4a2b-b6fe-2c79382b1633] 2025-10-09 09:37:15.301493 | orchestrator | 09:37:15.301 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=d37f53a5-be33-441d-a412-d5fff1c3824c] 2025-10-09 09:37:15.309593 | orchestrator | 09:37:15.308 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-10-09 09:37:15.309715 | orchestrator | 09:37:15.309 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-10-09 09:37:15.310365 | orchestrator | 09:37:15.310 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 
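[Editor's note] The management subnet, router and router interface finishing in this stretch correspond roughly to the following HCL. All literal values are taken from the plan output above; the cross-resource references are assumptions about how the real module is wired:

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}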
2025-10-09 09:37:15.505933 | orchestrator | 09:37:15.505 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=b8fc9682-ae22-40d6-a719-ba987cf6622c] 2025-10-09 09:37:15.527749 | orchestrator | 09:37:15.527 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-10-09 09:37:15.527836 | orchestrator | 09:37:15.527 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-10-09 09:37:15.528034 | orchestrator | 09:37:15.527 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-10-09 09:37:15.528045 | orchestrator | 09:37:15.527 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-10-09 09:37:15.529653 | orchestrator | 09:37:15.529 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-10-09 09:37:15.529861 | orchestrator | 09:37:15.529 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-10-09 09:37:15.532119 | orchestrator | 09:37:15.531 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-10-09 09:37:15.533049 | orchestrator | 09:37:15.532 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-10-09 09:37:15.666276 | orchestrator | 09:37:15.665 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=fa77977e-36cd-43ac-8376-276ff90e8b5d] 2025-10-09 09:37:15.668790 | orchestrator | 09:37:15.668 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=856a011e-136d-4428-b5f1-51c6ed2f5e01] 2025-10-09 09:37:15.680733 | orchestrator | 09:37:15.680 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-10-09 09:37:15.683595 | orchestrator | 09:37:15.683 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-10-09 09:37:16.112923 | orchestrator | 09:37:16.112 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=5c9a89fc-c216-4933-9806-e6e558fac16d] 2025-10-09 09:37:16.119739 | orchestrator | 09:37:16.119 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-10-09 09:37:16.231173 | orchestrator | 09:37:16.230 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=78ccd999-b9cf-477d-812c-3e991c33c34e] 2025-10-09 09:37:16.237313 | orchestrator | 09:37:16.237 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-10-09 09:37:16.375412 | orchestrator | 09:37:16.375 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=a6e66cb4-f78c-430a-8747-6288581695d6] 2025-10-09 09:37:16.386436 | orchestrator | 09:37:16.385 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-10-09 09:37:16.399404 | orchestrator | 09:37:16.399 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=8542b6e5-743d-45a0-a5a8-6df2e7ebd9aa] 2025-10-09 09:37:16.408458 | orchestrator | 09:37:16.408 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
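[Editor's note] Each security_group_*_rule resource above maps one-to-one onto an openstack_networking_secgroup_rule_v2 block. As an example, the SSH rule (management_rule1) from the plan, attached to the management security group; the literal values come from the plan output, the reference expression is an assumption:

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}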
2025-10-09 09:37:16.522675 | orchestrator | 09:37:16.522 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=3651c63f-2c15-4ca5-9dcd-fa7bb3ab0095] 2025-10-09 09:37:16.529719 | orchestrator | 09:37:16.529 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-10-09 09:37:16.590724 | orchestrator | 09:37:16.590 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=0be86678-2175-4faf-8104-e65f5fc1d882] 2025-10-09 09:37:16.599055 | orchestrator | 09:37:16.598 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-10-09 09:37:16.689982 | orchestrator | 09:37:16.689 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=7286b8cf-24e6-47be-8479-c600c59fc6f6] 2025-10-09 09:37:16.816350 | orchestrator | 09:37:16.815 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5e6e6946-35ea-4f5a-b366-4a709d0656e9] 2025-10-09 09:37:16.845251 | orchestrator | 09:37:16.844 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=f47b4427-b371-4f4b-98de-3a665dd89dd2] 2025-10-09 09:37:17.030545 | orchestrator | 09:37:17.030 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=aa0fc8ab-fd41-41f3-96d9-d0d145ad5de2] 2025-10-09 09:37:17.065058 | orchestrator | 09:37:17.064 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=25ffb944-b5f0-4a24-b7a1-f99d18579047] 2025-10-09 09:37:17.182190 | orchestrator | 09:37:17.181 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=6f6374f6-1998-48df-a6f6-6b55bcb628cb] 2025-10-09 09:37:17.336113 | orchestrator | 09:37:17.335 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=1f0ff5cb-3a0f-4c53-913d-26302888ebac] 2025-10-09 09:37:17.568470 | orchestrator | 09:37:17.567 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=245ba7ac-7efa-4723-b832-05e7cf2c4760] 2025-10-09 09:37:17.729348 | orchestrator | 09:37:17.728 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=1450aaa2-cedb-4ec7-9838-6c153cebce13] 2025-10-09 09:37:18.735287 | orchestrator | 09:37:18.734 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=b4f58eeb-a9c1-4506-be2d-0b295e0e8e3d] 2025-10-09 09:37:18.755695 | orchestrator | 09:37:18.755 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-10-09 09:37:18.764903 | orchestrator | 09:37:18.764 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-10-09 09:37:18.765188 | orchestrator | 09:37:18.765 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-10-09 09:37:18.776669 | orchestrator | 09:37:18.776 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-10-09 09:37:18.776947 | orchestrator | 09:37:18.776 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
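[Editor's note] The manager's floating IP is allocated first and only bound to the management port by a separate association resource, which is why manager_floating_ip and manager_floating_ip_association appear as two steps in the apply. A minimal sketch of that pairing; the pool name is an assumption:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"  # external network / pool name is an assumption
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}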
2025-10-09 09:37:18.784083 | orchestrator | 09:37:18.783 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-10-09 09:37:18.785430 | orchestrator | 09:37:18.785 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-10-09 09:37:20.271039 | orchestrator | 09:37:20.270 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=708923b1-3071-49b4-b325-c7083bccc4c0] 2025-10-09 09:37:20.283133 | orchestrator | 09:37:20.282 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-10-09 09:37:20.285680 | orchestrator | 09:37:20.285 STDOUT terraform: local_file.inventory: Creating... 2025-10-09 09:37:20.285818 | orchestrator | 09:37:20.285 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-10-09 09:37:20.290428 | orchestrator | 09:37:20.290 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=16dab1c529d49048747e9a9f13d160eee72634c0] 2025-10-09 09:37:20.292744 | orchestrator | 09:37:20.292 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=220f79476b9920c6dfc55ec045135ac975f0378c] 2025-10-09 09:37:21.081280 | orchestrator | 09:37:21.080 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=708923b1-3071-49b4-b325-c7083bccc4c0] 2025-10-09 09:37:28.767347 | orchestrator | 09:37:28.766 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-10-09 09:37:28.767593 | orchestrator | 09:37:28.767 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-10-09 09:37:28.777452 | orchestrator | 09:37:28.777 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-10-09 09:37:28.786172 | orchestrator | 09:37:28.785 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-10-09 09:37:28.787245 | orchestrator | 09:37:28.787 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-10-09 09:37:28.787506 | orchestrator | 09:37:28.787 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-10-09 09:37:38.768797 | orchestrator | 09:37:38.768 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-10-09 09:37:38.769097 | orchestrator | 09:37:38.768 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-10-09 09:37:38.777966 | orchestrator | 09:37:38.777 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-10-09 09:37:38.787151 | orchestrator | 09:37:38.786 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-10-09 09:37:38.788547 | orchestrator | 09:37:38.788 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-10-09 09:37:38.788734 | orchestrator | 09:37:38.788 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-10-09 09:37:48.772088 | orchestrator | 09:37:48.771 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-10-09 09:37:48.772254 | orchestrator | 09:37:48.771 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-10-09 09:37:48.779273 | orchestrator | 09:37:48.778 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-10-09 09:37:48.788425 | orchestrator | 09:37:48.788 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-10-09 09:37:48.788551 | orchestrator | 09:37:48.788 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-10-09 09:37:48.788683 | orchestrator | 09:37:48.788 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-10-09 09:37:49.239318 | orchestrator | 09:37:49.238 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=422cd2ba-20a0-4ede-b737-ef57d06f4e8f] 2025-10-09 09:37:49.541466 | orchestrator | 09:37:49.541 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=31ea7e18-dfb2-46cc-bf75-e2c34ee6efa6] 2025-10-09 09:37:58.775291 | orchestrator | 09:37:58.774 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-10-09 09:37:58.779913 | orchestrator | 09:37:58.779 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-10-09 09:37:58.789232 | orchestrator | 09:37:58.788 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2025-10-09 09:37:58.789341 | orchestrator | 09:37:58.789 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-10-09 09:37:59.628079 | orchestrator | 09:37:59.627 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=1da5288a-3d24-4894-a39e-2b7da0059e1a] 2025-10-09 09:37:59.640316 | orchestrator | 09:37:59.640 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=236f6bdf-1d37-4517-843e-430b54ae3802] 2025-10-09 09:37:59.688862 | orchestrator | 09:37:59.688 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=b8e8945f-c805-418e-9603-6c359d000905] 2025-10-09 09:37:59.743701 | orchestrator | 09:37:59.743 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=7eed8a77-c304-4363-92e1-d6a158f1d145] 2025-10-09 09:37:59.763124 | orchestrator | 09:37:59.762 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-10-09 09:37:59.770920 | orchestrator | 09:37:59.770 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=964410529690109207] 2025-10-09 09:37:59.773193 | orchestrator | 09:37:59.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-10-09 09:37:59.773522 | orchestrator | 09:37:59.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-10-09 09:37:59.773732 | orchestrator | 09:37:59.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-10-09 09:37:59.776300 | orchestrator | 09:37:59.776 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-10-09 09:37:59.776884 | orchestrator | 09:37:59.776 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-10-09 09:37:59.782901 | orchestrator | 09:37:59.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 
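[Editor's note] null_resource.node_semaphore is created only after the node servers finish and immediately before the volume attachments start, which suggests it is used as an ordering gate. A minimal sketch of that pattern; the exact dependencies and how the nine volumes are distributed over the six node servers are assumptions:

resource "null_resource" "node_semaphore" {
  depends_on = [openstack_compute_instance_v2.node_server]
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count      = 9
  volume_id  = openstack_blockstorage_volume_v3.node_volume[count.index].id
  # the volume-to-server mapping below is an assumption for illustration only
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  depends_on  = [null_resource.node_semaphore]
}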
2025-10-09 09:37:59.785196 | orchestrator | 09:37:59.785 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-10-09 09:37:59.794149 | orchestrator | 09:37:59.794 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-10-09 09:37:59.797515 | orchestrator | 09:37:59.797 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-10-09 09:37:59.814379 | orchestrator | 09:37:59.814 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-10-09 09:38:03.167262 | orchestrator | 09:38:03.166 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=7eed8a77-c304-4363-92e1-d6a158f1d145/79b0088c-29b1-4160-91bf-334de808a256] 2025-10-09 09:38:03.168832 | orchestrator | 09:38:03.168 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=b8e8945f-c805-418e-9603-6c359d000905/14000628-bc87-4bc6-bf7c-9844b85594c2] 2025-10-09 09:38:03.198734 | orchestrator | 09:38:03.198 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=b8e8945f-c805-418e-9603-6c359d000905/cb120539-e45f-4dc2-9fd5-4d9c62406057] 2025-10-09 09:38:03.201859 | orchestrator | 09:38:03.201 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=422cd2ba-20a0-4ede-b737-ef57d06f4e8f/b8594df3-e172-4a8f-830d-865dfc79f7eb] 2025-10-09 09:38:03.226761 | orchestrator | 09:38:03.226 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=422cd2ba-20a0-4ede-b737-ef57d06f4e8f/5ff7f971-6db7-40fa-8173-f07df227bea5] 2025-10-09 09:38:03.231480 | orchestrator | 09:38:03.231 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=7eed8a77-c304-4363-92e1-d6a158f1d145/a18ef090-7db7-4160-bb62-4e2315b7c41c] 2025-10-09 09:38:09.319746 | orchestrator | 09:38:09.319 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=b8e8945f-c805-418e-9603-6c359d000905/814c6d5d-dde1-4cc2-a34c-0e2957be6b16] 2025-10-09 09:38:09.327130 | orchestrator | 09:38:09.326 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=7eed8a77-c304-4363-92e1-d6a158f1d145/6cf84d94-208a-47e1-8626-6f0f49ffb66b] 2025-10-09 09:38:09.346130 | orchestrator | 09:38:09.345 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=422cd2ba-20a0-4ede-b737-ef57d06f4e8f/aa506c68-74fd-4bd6-80c5-fdf748fe0317] 2025-10-09 09:38:09.815271 | orchestrator | 09:38:09.814 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-10-09 09:38:19.816585 | orchestrator | 09:38:19.816 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-10-09 09:38:20.147565 | orchestrator | 09:38:20.147 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=afce6e28-94c8-4332-871a-4699b44bff82] 2025-10-09 09:38:20.451509 | orchestrator | 09:38:20.451 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
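[Editor's note] The plan marked both outputs as "(sensitive value)", so their values are not echoed in the job log below. A minimal sketch of how such outputs are typically declared; the value expressions are assumptions:

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}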
2025-10-09 09:38:20.451598 | orchestrator | 09:38:20.451 STDOUT terraform: Outputs: 2025-10-09 09:38:20.451624 | orchestrator | 09:38:20.451 STDOUT terraform: manager_address = 2025-10-09 09:38:20.451637 | orchestrator | 09:38:20.451 STDOUT terraform: private_key = 2025-10-09 09:38:20.531088 | orchestrator | ok: Runtime: 0:01:18.622090 2025-10-09 09:38:20.566436 | 2025-10-09 09:38:20.566626 | TASK [Fetch manager address] 2025-10-09 09:38:20.998533 | orchestrator | ok 2025-10-09 09:38:21.008978 | 2025-10-09 09:38:21.009126 | TASK [Set manager_host address] 2025-10-09 09:38:21.099932 | orchestrator | ok 2025-10-09 09:38:21.108889 | 2025-10-09 09:38:21.109014 | LOOP [Update ansible collections] 2025-10-09 09:38:21.849802 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-10-09 09:38:21.850195 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-09 09:38:21.850254 | orchestrator | Starting galaxy collection install process 2025-10-09 09:38:21.850294 | orchestrator | Process install dependency map 2025-10-09 09:38:21.850333 | orchestrator | Starting collection install process 2025-10-09 09:38:21.850382 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-10-09 09:38:21.850445 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-10-09 09:38:21.850488 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-10-09 09:38:21.850579 | orchestrator | ok: Item: commons Runtime: 0:00:00.450137 2025-10-09 09:38:22.604007 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-09 09:38:22.604180 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-10-09 09:38:22.604235 | orchestrator | Starting galaxy collection install process 2025-10-09 09:38:22.604275 | orchestrator | Process install dependency map 2025-10-09 09:38:22.604312 | orchestrator | Starting collection install process 2025-10-09 09:38:22.604345 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-10-09 09:38:22.604379 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-10-09 09:38:22.604412 | orchestrator | osism.services:999.0.0 was installed successfully 2025-10-09 09:38:22.604464 | orchestrator | ok: Item: services Runtime: 0:00:00.523484 2025-10-09 09:38:22.621442 | 2025-10-09 09:38:22.621590 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-10-09 09:38:33.565515 | orchestrator | ok 2025-10-09 09:38:33.576118 | 2025-10-09 09:38:33.576243 | TASK [Wait a little longer for the manager so that everything is ready] 2025-10-09 09:39:33.628035 | orchestrator | ok 2025-10-09 09:39:33.639346 | 2025-10-09 09:39:33.639470 | TASK [Fetch manager ssh hostkey] 2025-10-09 09:39:35.211002 | orchestrator | Output suppressed because no_log was given 2025-10-09 09:39:35.226491 | 2025-10-09 09:39:35.226733 | TASK [Get ssh keypair from terraform environment] 2025-10-09 09:39:35.765080 | orchestrator | ok: Runtime: 0:00:00.008134 2025-10-09 09:39:35.780316 | 2025-10-09 09:39:35.780459 | TASK [Point out that the following task takes some time and does not give any output] 
2025-10-09 09:39:35.822595 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-10-09 09:39:35.829947 | 2025-10-09 09:39:35.830046 | TASK [Run manager part 0] 2025-10-09 09:39:36.640970 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-09 09:39:36.683856 | orchestrator | 2025-10-09 09:39:36.683903 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-10-09 09:39:36.683910 | orchestrator | 2025-10-09 09:39:36.683922 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-10-09 09:39:38.600348 | orchestrator | ok: [testbed-manager] 2025-10-09 09:39:38.600388 | orchestrator | 2025-10-09 09:39:38.600408 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-10-09 09:39:38.600417 | orchestrator | 2025-10-09 09:39:38.600425 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:39:40.505445 | orchestrator | ok: [testbed-manager] 2025-10-09 09:39:40.505507 | orchestrator | 2025-10-09 09:39:40.505523 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-10-09 09:39:41.161999 | orchestrator | ok: [testbed-manager] 2025-10-09 09:39:41.162171 | orchestrator | 2025-10-09 09:39:41.162181 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-10-09 09:39:41.202390 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:41.202421 | orchestrator | 2025-10-09 09:39:41.202429 | orchestrator | TASK [Update package cache] **************************************************** 2025-10-09 09:39:41.227605 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:41.227632 | orchestrator | 2025-10-09 09:39:41.227638 | orchestrator | TASK [Install required packages] *********************************************** 2025-10-09 09:39:41.250473 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:41.250498 | orchestrator | 2025-10-09 09:39:41.250503 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-10-09 09:39:41.275010 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:41.275034 | orchestrator | 2025-10-09 09:39:41.275040 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-10-09 09:39:41.311197 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:41.311212 | orchestrator | 2025-10-09 09:39:41.311217 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2025-10-09 09:39:41.342511 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:41.342524 | orchestrator | 2025-10-09 09:39:41.342529 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-10-09 09:39:41.374806 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:41.374820 | orchestrator | 2025-10-09 09:39:41.374825 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-10-09 09:39:42.180390 | orchestrator | changed: [testbed-manager] 2025-10-09 09:39:42.180432 | orchestrator | 2025-10-09 09:39:42.180438 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-10-09 09:42:29.900644 | orchestrator | changed: [testbed-manager] 2025-10-09 09:42:29.900707 | orchestrator | 2025-10-09 09:42:29.900722 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-10-09 09:43:50.546448 | orchestrator | changed: [testbed-manager] 2025-10-09 09:43:50.546547 | orchestrator | 2025-10-09 09:43:50.546565 | orchestrator | TASK [Install required packages] *********************************************** 2025-10-09 09:44:14.780964 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:14.781078 | orchestrator | 2025-10-09 09:44:14.781097 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-10-09 09:44:24.391219 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:24.391304 | orchestrator | 2025-10-09 09:44:24.391321 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-10-09 09:44:24.434702 | orchestrator | ok: [testbed-manager] 2025-10-09 09:44:24.434738 | orchestrator | 2025-10-09 09:44:24.434746 | orchestrator | TASK [Get current user] ******************************************************** 2025-10-09 09:44:25.254279 | orchestrator | ok: [testbed-manager] 2025-10-09 09:44:25.254359 | orchestrator | 2025-10-09 09:44:25.254376 | orchestrator | TASK [Create venv directory] *************************************************** 2025-10-09 09:44:26.043472 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:26.043512 | orchestrator | 2025-10-09 09:44:26.043519 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-10-09 09:44:33.020569 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:33.020648 | orchestrator | 2025-10-09 09:44:33.020686 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-10-09 09:44:39.589847 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:39.589888 | orchestrator | 2025-10-09 09:44:39.589898 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-10-09 09:44:42.456686 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:42.456728 | orchestrator | 2025-10-09 09:44:42.456737 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-10-09 09:44:44.273573 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:44.274269 | orchestrator | 2025-10-09 09:44:44.274295 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-10-09 09:44:45.419894 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-10-09 09:44:45.419977 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-10-09 09:44:45.419992 | orchestrator | 2025-10-09 09:44:45.420004 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-10-09 09:44:45.462412 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-10-09 09:44:45.462494 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-10-09 09:44:45.462508 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-10-09 09:44:45.462520 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-10-09 09:44:49.971122 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-10-09 09:44:49.971217 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-10-09 09:44:49.971234 | orchestrator | 2025-10-09 09:44:49.971247 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-10-09 09:44:50.585855 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:50.585894 | orchestrator | 2025-10-09 09:44:50.585902 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-10-09 09:45:09.959553 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-10-09 09:45:09.959645 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-10-09 09:45:09.959664 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-10-09 09:45:09.959677 | orchestrator | 2025-10-09 09:45:09.959689 | orchestrator | TASK [Install local collections] *********************************************** 2025-10-09 09:45:12.409553 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-10-09 09:45:12.409628 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-10-09 09:45:12.409642 | orchestrator | 2025-10-09 09:45:12.409654 | orchestrator | PLAY [Create operator user] **************************************************** 2025-10-09 09:45:12.409666 | orchestrator | 2025-10-09 09:45:12.409678 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:45:13.857473 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:13.857547 | orchestrator | 2025-10-09 09:45:13.857565 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-10-09 09:45:13.907130 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:13.907184 | orchestrator | 2025-10-09 09:45:13.907198 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-10-09 09:45:13.968862 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:13.968892 | orchestrator | 2025-10-09 09:45:13.968900 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-10-09 09:45:14.714533 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:14.714620 | orchestrator | 2025-10-09 09:45:14.714635 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-10-09 09:45:15.486628 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:15.486667 | orchestrator | 2025-10-09 09:45:15.486675 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-10-09 09:45:16.924619 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-10-09 09:45:16.924708 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-10-09 09:45:16.924724 | orchestrator | 2025-10-09 09:45:16.924751 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-10-09 09:45:18.289299 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:18.289437 | orchestrator | 2025-10-09 09:45:18.289454 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-10-09 09:45:20.160149 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 
09:45:20.160197 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-10-09 09:45:20.160205 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-10-09 09:45:20.160212 | orchestrator | 2025-10-09 09:45:20.160220 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-10-09 09:45:20.218430 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:20.218475 | orchestrator | 2025-10-09 09:45:20.218485 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-10-09 09:45:20.811217 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:20.811261 | orchestrator | 2025-10-09 09:45:20.811271 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-10-09 09:45:20.884587 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:20.884628 | orchestrator | 2025-10-09 09:45:20.884637 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-10-09 09:45:21.775835 | orchestrator | changed: [testbed-manager] => (item=None) 2025-10-09 09:45:21.775923 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:21.775942 | orchestrator | 2025-10-09 09:45:21.775956 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-10-09 09:45:21.816347 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:21.816428 | orchestrator | 2025-10-09 09:45:21.816443 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-10-09 09:45:21.852854 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:21.852925 | orchestrator | 2025-10-09 09:45:21.852940 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-10-09 09:45:21.888880 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:21.888934 | orchestrator | 2025-10-09 09:45:21.888946 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-10-09 09:45:21.955092 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:21.955162 | orchestrator | 2025-10-09 09:45:21.955182 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-10-09 09:45:22.704181 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:22.704257 | orchestrator | 2025-10-09 09:45:22.704271 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-10-09 09:45:22.704283 | orchestrator | 2025-10-09 09:45:22.704293 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:45:24.209441 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:24.209479 | orchestrator | 2025-10-09 09:45:24.209484 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-10-09 09:45:25.221961 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:25.221999 | orchestrator | 2025-10-09 09:45:25.222005 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:45:25.222011 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-10-09 09:45:25.222057 | orchestrator | 2025-10-09 09:45:25.569841 | orchestrator | ok: Runtime: 0:05:49.206681 2025-10-09 09:45:25.591785 | 
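The osism.commons.operator play above prepares the operator account that the rest of the deployment runs as (the /home/dragon/.gilt path further down suggests the user is called dragon, but the exact names and the sudoers content are not spelled out in this log). A rough shell equivalent of the tasks it reports, with those details assumed:

groupadd dragon                                                 # Create operator group
useradd -m -g dragon -s /bin/bash dragon                        # Create user
usermod -aG adm,sudo dragon                                     # Add user to additional groups
echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon   # Copy user sudoers file (content assumed)
printf 'export LANGUAGE=C.UTF-8\nexport LANG=C.UTF-8\nexport LC_ALL=C.UTF-8\n' >> /home/dragon/.bashrc
install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh        # Create .ssh directory
passwd -l dragon                                                # Unset & lock password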
2025-10-09 09:45:25.592014 | TASK [Point out that the log in on the manager is now possible] 2025-10-09 09:45:25.640780 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-10-09 09:45:25.651236 | 2025-10-09 09:45:25.651361 | TASK [Point out that the following task takes some time and does not give any output] 2025-10-09 09:45:25.689813 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 2025-10-09 09:45:25.700075 | 2025-10-09 09:45:25.700195 | TASK [Run manager part 1 + 2] 2025-10-09 09:45:26.768225 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-09 09:45:26.821592 | orchestrator | 2025-10-09 09:45:26.821638 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-10-09 09:45:26.821645 | orchestrator | 2025-10-09 09:45:26.821659 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:45:29.424966 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:29.425005 | orchestrator | 2025-10-09 09:45:29.425022 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-10-09 09:45:29.466817 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:29.466854 | orchestrator | 2025-10-09 09:45:29.466864 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-10-09 09:45:29.515420 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:29.515459 | orchestrator | 2025-10-09 09:45:29.515469 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-10-09 09:45:29.563408 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:29.563444 | orchestrator | 2025-10-09 09:45:29.563454 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-10-09 09:45:29.630397 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:29.630433 | orchestrator | 2025-10-09 09:45:29.630443 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-10-09 09:45:29.683780 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:29.683811 | orchestrator | 2025-10-09 09:45:29.683820 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-10-09 09:45:29.726439 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-10-09 09:45:29.726461 | orchestrator | 2025-10-09 09:45:29.726466 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-10-09 09:45:30.420892 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:30.420931 | orchestrator | 2025-10-09 09:45:30.420939 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-10-09 09:45:30.468669 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:30.468709 | orchestrator | 2025-10-09 09:45:30.468718 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-10-09 09:45:31.838871 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:31.838915 | orchestrator | 2025-10-09 09:45:31.838924 | orchestrator | TASK
[osism.commons.repository : Remove sources.list file] ********************* 2025-10-09 09:45:32.411974 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:32.412017 | orchestrator | 2025-10-09 09:45:32.412025 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-10-09 09:45:33.566698 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:33.566762 | orchestrator | 2025-10-09 09:45:33.566780 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-10-09 09:45:51.484637 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:51.484732 | orchestrator | 2025-10-09 09:45:51.484749 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-10-09 09:45:52.177984 | orchestrator | ok: [testbed-manager] 2025-10-09 09:45:52.178108 | orchestrator | 2025-10-09 09:45:52.178125 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-10-09 09:45:52.227992 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:45:52.228032 | orchestrator | 2025-10-09 09:45:52.228072 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-10-09 09:45:53.219955 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:53.220198 | orchestrator | 2025-10-09 09:45:53.220220 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-10-09 09:45:54.203000 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:54.203106 | orchestrator | 2025-10-09 09:45:54.203123 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-10-09 09:45:54.799681 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:54.799748 | orchestrator | 2025-10-09 09:45:54.799762 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-10-09 09:45:54.840448 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-10-09 09:45:54.840539 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-10-09 09:45:54.840554 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-10-09 09:45:54.840566 | orchestrator | deprecation_warnings=False in ansible.cfg. 
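On Ubuntu 24.04 the repository role removes the classic sources.list and ships a deb822-style ubuntu.sources instead (see the "Remove sources.list file" and "Copy ubuntu.sources file" tasks above). A sketch of what such a file looks like; the mirror and components below are illustrative assumptions, only the deb822 layout itself is standard:

cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
apt-get update    # "Update package cache"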
2025-10-09 09:45:57.902871 | orchestrator | changed: [testbed-manager] 2025-10-09 09:45:57.902981 | orchestrator | 2025-10-09 09:45:57.902998 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-10-09 09:46:07.300561 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-10-09 09:46:07.300614 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-10-09 09:46:07.300624 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-10-09 09:46:07.300632 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-10-09 09:46:07.300642 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-10-09 09:46:07.300648 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-10-09 09:46:07.300654 | orchestrator | 2025-10-09 09:46:07.300661 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-10-09 09:46:08.361823 | orchestrator | changed: [testbed-manager] 2025-10-09 09:46:08.361884 | orchestrator | 2025-10-09 09:46:08.361899 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-10-09 09:46:08.404871 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:46:08.404935 | orchestrator | 2025-10-09 09:46:08.404952 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-10-09 09:46:11.460798 | orchestrator | changed: [testbed-manager] 2025-10-09 09:46:11.460846 | orchestrator | 2025-10-09 09:46:11.460854 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-10-09 09:46:11.502602 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:46:11.502643 | orchestrator | 2025-10-09 09:46:11.502652 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-10-09 09:48:00.111591 | orchestrator | changed: [testbed-manager] 2025-10-09 09:48:00.111637 | orchestrator | 2025-10-09 09:48:00.111644 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-10-09 09:48:01.373088 | orchestrator | ok: [testbed-manager] 2025-10-09 09:48:01.373206 | orchestrator | 2025-10-09 09:48:01.373223 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:48:01.373236 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-10-09 09:48:01.373248 | orchestrator | 2025-10-09 09:48:01.817233 | orchestrator | ok: Runtime: 0:02:35.490462 2025-10-09 09:48:01.832764 | 2025-10-09 09:48:01.832900 | TASK [Reboot manager] 2025-10-09 09:48:03.366642 | orchestrator | ok: Runtime: 0:00:01.012193 2025-10-09 09:48:03.382477 | 2025-10-09 09:48:03.382687 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-10-09 09:48:19.828818 | orchestrator | ok 2025-10-09 09:48:19.838886 | 2025-10-09 09:48:19.839004 | TASK [Wait a little longer for the manager so that everything is ready] 2025-10-09 09:49:19.877393 | orchestrator | ok 2025-10-09 09:49:19.886742 | 2025-10-09 09:49:19.886891 | TASK [Deploy manager + bootstrap nodes] 2025-10-09 09:49:22.687141 | orchestrator | 2025-10-09 09:49:22.687331 | orchestrator | # DEPLOY MANAGER 2025-10-09 09:49:22.687355 | orchestrator | 2025-10-09 09:49:22.687370 | orchestrator | + set -e 2025-10-09 09:49:22.687384 | orchestrator | + echo 2025-10-09 09:49:22.687398 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-10-09 09:49:22.687416 | orchestrator | + echo 2025-10-09 09:49:22.687465 | orchestrator | + cat /opt/manager-vars.sh 2025-10-09 09:49:22.690930 | orchestrator | export NUMBER_OF_NODES=6 2025-10-09 09:49:22.690956 | orchestrator | 2025-10-09 09:49:22.690969 | orchestrator | export CEPH_VERSION=reef 2025-10-09 09:49:22.690982 | orchestrator | export CONFIGURATION_VERSION=main 2025-10-09 09:49:22.690995 | orchestrator | export MANAGER_VERSION=9.4.0 2025-10-09 09:49:22.691034 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-10-09 09:49:22.691046 | orchestrator | 2025-10-09 09:49:22.691065 | orchestrator | export ARA=false 2025-10-09 09:49:22.691077 | orchestrator | export DEPLOY_MODE=manager 2025-10-09 09:49:22.691094 | orchestrator | export TEMPEST=false 2025-10-09 09:49:22.691106 | orchestrator | export IS_ZUUL=true 2025-10-09 09:49:22.691117 | orchestrator | 2025-10-09 09:49:22.691135 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 09:49:22.691147 | orchestrator | export EXTERNAL_API=false 2025-10-09 09:49:22.691158 | orchestrator | 2025-10-09 09:49:22.691169 | orchestrator | export IMAGE_USER=ubuntu 2025-10-09 09:49:22.691184 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-10-09 09:49:22.691195 | orchestrator | 2025-10-09 09:49:22.691206 | orchestrator | export CEPH_STACK=ceph-ansible 2025-10-09 09:49:22.691223 | orchestrator | 2025-10-09 09:49:22.691234 | orchestrator | + echo 2025-10-09 09:49:22.691251 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 09:49:22.692662 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 09:49:22.692680 | orchestrator | ++ INTERACTIVE=false 2025-10-09 09:49:22.692693 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 09:49:22.692709 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 09:49:22.693154 | orchestrator | + source /opt/manager-vars.sh 2025-10-09 09:49:22.693169 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-10-09 09:49:22.693418 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-09 09:49:22.693438 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-09 09:49:22.693449 | orchestrator | ++ CEPH_VERSION=reef 2025-10-09 09:49:22.693460 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-09 09:49:22.693472 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-10-09 09:49:22.693483 | orchestrator | ++ export MANAGER_VERSION=9.4.0 2025-10-09 09:49:22.693494 | orchestrator | ++ MANAGER_VERSION=9.4.0 2025-10-09 09:49:22.693505 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-09 09:49:22.693525 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-09 09:49:22.693536 | orchestrator | ++ export ARA=false 2025-10-09 09:49:22.693551 | orchestrator | ++ ARA=false 2025-10-09 09:49:22.693566 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-09 09:49:22.693582 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-09 09:49:22.693593 | orchestrator | ++ export TEMPEST=false 2025-10-09 09:49:22.693603 | orchestrator | ++ TEMPEST=false 2025-10-09 09:49:22.693614 | orchestrator | ++ export IS_ZUUL=true 2025-10-09 09:49:22.693625 | orchestrator | ++ IS_ZUUL=true 2025-10-09 09:49:22.693636 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 09:49:22.693647 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 09:49:22.693658 | orchestrator | ++ export EXTERNAL_API=false 2025-10-09 09:49:22.693669 | orchestrator | ++ EXTERNAL_API=false 2025-10-09 09:49:22.693679 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-09 
09:49:22.693690 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-09 09:49:22.693701 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-09 09:49:22.693715 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-09 09:49:22.693733 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-09 09:49:22.693749 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-09 09:49:22.693760 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-10-09 09:49:22.759324 | orchestrator | + docker version 2025-10-09 09:49:23.045798 | orchestrator | Client: Docker Engine - Community 2025-10-09 09:49:23.045880 | orchestrator | Version: 27.5.1 2025-10-09 09:49:23.045895 | orchestrator | API version: 1.47 2025-10-09 09:49:23.045907 | orchestrator | Go version: go1.22.11 2025-10-09 09:49:23.045918 | orchestrator | Git commit: 9f9e405 2025-10-09 09:49:23.045929 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-10-09 09:49:23.045940 | orchestrator | OS/Arch: linux/amd64 2025-10-09 09:49:23.045951 | orchestrator | Context: default 2025-10-09 09:49:23.045962 | orchestrator | 2025-10-09 09:49:23.045973 | orchestrator | Server: Docker Engine - Community 2025-10-09 09:49:23.045984 | orchestrator | Engine: 2025-10-09 09:49:23.045996 | orchestrator | Version: 27.5.1 2025-10-09 09:49:23.046007 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-10-09 09:49:23.046111 | orchestrator | Go version: go1.22.11 2025-10-09 09:49:23.046127 | orchestrator | Git commit: 4c9b3b0 2025-10-09 09:49:23.046138 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-10-09 09:49:23.046149 | orchestrator | OS/Arch: linux/amd64 2025-10-09 09:49:23.046159 | orchestrator | Experimental: false 2025-10-09 09:49:23.046170 | orchestrator | containerd: 2025-10-09 09:49:23.046182 | orchestrator | Version: v1.7.28 2025-10-09 09:49:23.046193 | orchestrator | GitCommit: b98a3aace656320842a23f4a392a33f46af97866 2025-10-09 09:49:23.046204 | orchestrator | runc: 2025-10-09 09:49:23.046215 | orchestrator | Version: 1.3.0 2025-10-09 09:49:23.046226 | orchestrator | GitCommit: v1.3.0-0-g4ca628d1 2025-10-09 09:49:23.046237 | orchestrator | docker-init: 2025-10-09 09:49:23.046247 | orchestrator | Version: 0.19.0 2025-10-09 09:49:23.046259 | orchestrator | GitCommit: de40ad0 2025-10-09 09:49:23.048493 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-10-09 09:49:23.056820 | orchestrator | + set -e 2025-10-09 09:49:23.056840 | orchestrator | + source /opt/manager-vars.sh 2025-10-09 09:49:23.056852 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-10-09 09:49:23.056863 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-09 09:49:23.056874 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-09 09:49:23.056885 | orchestrator | ++ CEPH_VERSION=reef 2025-10-09 09:49:23.056896 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-09 09:49:23.056907 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-10-09 09:49:23.056918 | orchestrator | ++ export MANAGER_VERSION=9.4.0 2025-10-09 09:49:23.056929 | orchestrator | ++ MANAGER_VERSION=9.4.0 2025-10-09 09:49:23.056940 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-09 09:49:23.056951 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-09 09:49:23.056962 | orchestrator | ++ export ARA=false 2025-10-09 09:49:23.056973 | orchestrator | ++ ARA=false 2025-10-09 09:49:23.056984 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-09 09:49:23.056995 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-09 09:49:23.057005 | orchestrator | ++ 
export TEMPEST=false 2025-10-09 09:49:23.057039 | orchestrator | ++ TEMPEST=false 2025-10-09 09:49:23.057051 | orchestrator | ++ export IS_ZUUL=true 2025-10-09 09:49:23.057062 | orchestrator | ++ IS_ZUUL=true 2025-10-09 09:49:23.057073 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 09:49:23.057084 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 09:49:23.057095 | orchestrator | ++ export EXTERNAL_API=false 2025-10-09 09:49:23.057106 | orchestrator | ++ EXTERNAL_API=false 2025-10-09 09:49:23.057116 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-09 09:49:23.057127 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-09 09:49:23.057138 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-09 09:49:23.057149 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-09 09:49:23.057160 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-09 09:49:23.057171 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-09 09:49:23.057182 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 09:49:23.057193 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 09:49:23.057204 | orchestrator | ++ INTERACTIVE=false 2025-10-09 09:49:23.057214 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 09:49:23.057228 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 09:49:23.057239 | orchestrator | + [[ 9.4.0 != \l\a\t\e\s\t ]] 2025-10-09 09:49:23.057250 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.4.0 2025-10-09 09:49:23.063001 | orchestrator | + set -e 2025-10-09 09:49:23.063055 | orchestrator | + VERSION=9.4.0 2025-10-09 09:49:23.063069 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.4.0/g' /opt/configuration/environments/manager/configuration.yml 2025-10-09 09:49:23.073224 | orchestrator | + [[ 9.4.0 != \l\a\t\e\s\t ]] 2025-10-09 09:49:23.073246 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-10-09 09:49:23.076237 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-10-09 09:49:23.079920 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-10-09 09:49:23.085039 | orchestrator | + set -e 2025-10-09 09:49:23.085291 | orchestrator | /opt/configuration ~ 2025-10-09 09:49:23.085376 | orchestrator | + pushd /opt/configuration 2025-10-09 09:49:23.085391 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-10-09 09:49:23.086536 | orchestrator | + source /opt/venv/bin/activate 2025-10-09 09:49:23.088369 | orchestrator | ++ deactivate nondestructive 2025-10-09 09:49:23.088403 | orchestrator | ++ '[' -n '' ']' 2025-10-09 09:49:23.088419 | orchestrator | ++ '[' -n '' ']' 2025-10-09 09:49:23.088455 | orchestrator | ++ hash -r 2025-10-09 09:49:23.088470 | orchestrator | ++ '[' -n '' ']' 2025-10-09 09:49:23.088483 | orchestrator | ++ unset VIRTUAL_ENV 2025-10-09 09:49:23.088496 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-10-09 09:49:23.088508 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-10-09 09:49:23.088522 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-10-09 09:49:23.088534 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-10-09 09:49:23.088547 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-10-09 09:49:23.088560 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-10-09 09:49:23.088571 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-10-09 09:49:23.088583 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-10-09 09:49:23.088594 | orchestrator | ++ export PATH 2025-10-09 09:49:23.088622 | orchestrator | ++ '[' -n '' ']' 2025-10-09 09:49:23.088644 | orchestrator | ++ '[' -z '' ']' 2025-10-09 09:49:23.088655 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-10-09 09:49:23.088666 | orchestrator | ++ PS1='(venv) ' 2025-10-09 09:49:23.088677 | orchestrator | ++ export PS1 2025-10-09 09:49:23.088688 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-10-09 09:49:23.088698 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-10-09 09:49:23.088709 | orchestrator | ++ hash -r 2025-10-09 09:49:23.088720 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-10-09 09:49:24.455803 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-10-09 09:49:24.456734 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2025-10-09 09:49:24.458468 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-10-09 09:49:24.459860 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2025-10-09 09:49:24.461005 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-10-09 09:49:24.471653 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.0) 2025-10-09 09:49:24.473407 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-10-09 09:49:24.474485 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2025-10-09 09:49:24.475819 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-10-09 09:49:24.512465 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3) 2025-10-09 09:49:24.513948 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-10-09 09:49:24.515883 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-10-09 09:49:24.517166 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.10.5) 2025-10-09 09:49:24.521433 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2025-10-09 09:49:24.754916 | orchestrator | ++ which gilt 2025-10-09 09:49:24.757459 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-10-09 09:49:24.757478 | orchestrator | + /opt/venv/bin/gilt overlay 2025-10-09 09:49:25.016592 | orchestrator | osism.cfg-generics: 2025-10-09 09:49:25.191094 | orchestrator | - copied (v0.20250927.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-10-09 09:49:25.192754 | orchestrator | - copied (v0.20250927.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-10-09 09:49:25.193715 | orchestrator | - copied (v0.20250927.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-10-09 09:49:25.193737 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-10-09 09:49:26.001667 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-10-09 09:49:26.014182 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-10-09 09:49:26.546330 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-10-09 09:49:26.600064 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-10-09 09:49:26.600146 | orchestrator | + deactivate 2025-10-09 09:49:26.600162 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-10-09 09:49:26.600177 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-10-09 09:49:26.600197 | orchestrator | + export PATH 2025-10-09 09:49:26.600221 | orchestrator | ~ 2025-10-09 09:49:26.600233 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-10-09 09:49:26.600246 | orchestrator | + '[' -n '' ']' 2025-10-09 09:49:26.600259 | orchestrator | + hash -r 2025-10-09 09:49:26.600270 | orchestrator | + '[' -n '' ']' 2025-10-09 09:49:26.600281 | orchestrator | + unset VIRTUAL_ENV 2025-10-09 09:49:26.600292 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-10-09 09:49:26.600303 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-10-09 09:49:26.600314 | orchestrator | + unset -f deactivate 2025-10-09 09:49:26.600325 | orchestrator | + popd 2025-10-09 09:49:26.602277 | orchestrator | + [[ 9.4.0 == \l\a\t\e\s\t ]] 2025-10-09 09:49:26.602300 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-10-09 09:49:26.602428 | orchestrator | ++ semver 9.4.0 7.0.0 2025-10-09 09:49:26.663605 | orchestrator | + [[ 1 -ge 0 ]] 2025-10-09 09:49:26.663661 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-10-09 09:49:26.663674 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-10-09 09:49:26.760247 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-10-09 09:49:26.760312 | orchestrator | + source /opt/venv/bin/activate 2025-10-09 09:49:26.760333 | orchestrator | ++ deactivate nondestructive 2025-10-09 09:49:26.760346 | orchestrator | ++ '[' -n '' ']' 2025-10-09 09:49:26.760365 | orchestrator | ++ '[' -n '' ']' 2025-10-09 09:49:26.760385 | orchestrator | ++ hash -r 2025-10-09 09:49:26.760396 | orchestrator | ++ '[' -n '' ']' 2025-10-09 09:49:26.760409 | orchestrator | ++ unset VIRTUAL_ENV 2025-10-09 09:49:26.760426 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-10-09 09:49:26.760443 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-10-09 09:49:26.760467 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-10-09 09:49:26.760486 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-10-09 09:49:26.760497 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-10-09 09:49:26.760509 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-10-09 09:49:26.760776 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-10-09 09:49:26.760799 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-10-09 09:49:26.760842 | orchestrator | ++ export PATH 2025-10-09 09:49:26.760857 | orchestrator | ++ '[' -n '' ']' 2025-10-09 09:49:26.760875 | orchestrator | ++ '[' -z '' ']' 2025-10-09 09:49:26.760893 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-10-09 09:49:26.760911 | orchestrator | ++ PS1='(venv) ' 2025-10-09 09:49:26.760928 | orchestrator | ++ export PS1 2025-10-09 09:49:26.760947 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-10-09 09:49:26.760966 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-10-09 09:49:26.760985 | orchestrator | ++ hash -r 2025-10-09 09:49:26.761003 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-10-09 09:49:28.138490 | orchestrator | 2025-10-09 09:49:28.138591 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-10-09 09:49:28.138608 | orchestrator | 2025-10-09 09:49:28.138620 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-10-09 09:49:28.750717 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:28.750806 | orchestrator | 2025-10-09 09:49:28.750820 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-10-09 09:49:29.849461 | orchestrator | changed: [testbed-manager] 2025-10-09 09:49:29.849555 | orchestrator | 2025-10-09 09:49:29.849569 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-10-09 09:49:29.849581 | orchestrator | 2025-10-09 09:49:29.849591 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:49:32.358765 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:32.358868 | orchestrator | 2025-10-09 09:49:32.358884 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-10-09 09:49:32.414724 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:32.414805 | orchestrator | 2025-10-09 09:49:32.414820 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-10-09 09:49:32.913471 | orchestrator | changed: [testbed-manager] 2025-10-09 09:49:32.913552 | orchestrator | 2025-10-09 09:49:32.913568 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-10-09 09:49:32.949427 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:49:32.949451 | orchestrator | 2025-10-09 09:49:32.949463 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-10-09 09:49:33.314720 | orchestrator | changed: [testbed-manager] 2025-10-09 09:49:33.314802 | orchestrator | 2025-10-09 09:49:33.314815 | orchestrator | TASK [Use insecure glance configuration] 
*************************************** 2025-10-09 09:49:33.376069 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:49:33.376104 | orchestrator | 2025-10-09 09:49:33.376116 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-10-09 09:49:33.747383 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:33.747477 | orchestrator | 2025-10-09 09:49:33.747494 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-10-09 09:49:33.885834 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:49:33.885877 | orchestrator | 2025-10-09 09:49:33.885891 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-10-09 09:49:33.885902 | orchestrator | 2025-10-09 09:49:33.885914 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:49:35.703602 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:35.703695 | orchestrator | 2025-10-09 09:49:35.703710 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-10-09 09:49:35.833465 | orchestrator | included: osism.services.traefik for testbed-manager 2025-10-09 09:49:35.833498 | orchestrator | 2025-10-09 09:49:35.833511 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-10-09 09:49:35.904621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-10-09 09:49:35.904651 | orchestrator | 2025-10-09 09:49:35.904663 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-10-09 09:49:37.077890 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-10-09 09:49:37.077974 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-10-09 09:49:37.077990 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-10-09 09:49:37.078002 | orchestrator | 2025-10-09 09:49:37.078083 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-10-09 09:49:39.026709 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-10-09 09:49:39.026811 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-10-09 09:49:39.026826 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-10-09 09:49:39.026839 | orchestrator | 2025-10-09 09:49:39.026852 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-10-09 09:49:39.709602 | orchestrator | changed: [testbed-manager] => (item=None) 2025-10-09 09:49:39.709694 | orchestrator | changed: [testbed-manager] 2025-10-09 09:49:39.709709 | orchestrator | 2025-10-09 09:49:39.709721 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-10-09 09:49:40.397519 | orchestrator | changed: [testbed-manager] => (item=None) 2025-10-09 09:49:40.397630 | orchestrator | changed: [testbed-manager] 2025-10-09 09:49:40.397648 | orchestrator | 2025-10-09 09:49:40.397661 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-10-09 09:49:40.459446 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:49:40.459518 | orchestrator | 2025-10-09 09:49:40.459532 | orchestrator | TASK [osism.services.traefik : Remove 
dynamic configuration] ******************* 2025-10-09 09:49:40.852660 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:40.852762 | orchestrator | 2025-10-09 09:49:40.852780 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-10-09 09:49:40.932525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-10-09 09:49:40.932600 | orchestrator | 2025-10-09 09:49:40.932610 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-10-09 09:49:42.094794 | orchestrator | changed: [testbed-manager] 2025-10-09 09:49:42.094888 | orchestrator | 2025-10-09 09:49:42.094904 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-10-09 09:49:42.970695 | orchestrator | changed: [testbed-manager] 2025-10-09 09:49:42.970791 | orchestrator | 2025-10-09 09:49:42.970808 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-10-09 09:49:54.039265 | orchestrator | changed: [testbed-manager] 2025-10-09 09:49:54.039429 | orchestrator | 2025-10-09 09:49:54.039476 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-10-09 09:49:54.103114 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:49:54.103192 | orchestrator | 2025-10-09 09:49:54.103208 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-10-09 09:49:54.103221 | orchestrator | 2025-10-09 09:49:54.103233 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:49:56.032965 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:56.033062 | orchestrator | 2025-10-09 09:49:56.033077 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-10-09 09:49:56.157875 | orchestrator | included: osism.services.manager for testbed-manager 2025-10-09 09:49:56.157908 | orchestrator | 2025-10-09 09:49:56.157919 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-10-09 09:49:56.230140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-10-09 09:49:56.230174 | orchestrator | 2025-10-09 09:49:56.230184 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-10-09 09:49:59.155102 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:59.155207 | orchestrator | 2025-10-09 09:49:59.155223 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-10-09 09:49:59.201730 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:59.201761 | orchestrator | 2025-10-09 09:49:59.201774 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-10-09 09:49:59.331617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-10-09 09:49:59.331676 | orchestrator | 2025-10-09 09:49:59.331693 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-10-09 09:50:02.477796 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-10-09 09:50:02.477879 | orchestrator | 
changed: [testbed-manager] => (item=/opt/archive) 2025-10-09 09:50:02.477910 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-10-09 09:50:02.477932 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-10-09 09:50:02.477944 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-10-09 09:50:02.477955 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-10-09 09:50:02.477966 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-10-09 09:50:02.477978 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-10-09 09:50:02.478085 | orchestrator | 2025-10-09 09:50:02.478104 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-10-09 09:50:03.194451 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:03.194531 | orchestrator | 2025-10-09 09:50:03.194547 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-10-09 09:50:03.915959 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:03.916053 | orchestrator | 2025-10-09 09:50:03.916064 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-10-09 09:50:04.009758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-10-09 09:50:04.009813 | orchestrator | 2025-10-09 09:50:04.009823 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-10-09 09:50:05.285917 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-10-09 09:50:05.286098 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-10-09 09:50:05.286117 | orchestrator | 2025-10-09 09:50:05.286130 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-10-09 09:50:05.974326 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:05.974405 | orchestrator | 2025-10-09 09:50:05.974421 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-10-09 09:50:06.030570 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:50:06.030638 | orchestrator | 2025-10-09 09:50:06.030650 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-10-09 09:50:06.122528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-10-09 09:50:06.122587 | orchestrator | 2025-10-09 09:50:06.122603 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-10-09 09:50:06.796979 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:06.797093 | orchestrator | 2025-10-09 09:50:06.797107 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-10-09 09:50:06.872514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-10-09 09:50:06.872577 | orchestrator | 2025-10-09 09:50:06.872593 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-10-09 09:50:08.356888 | orchestrator | changed: [testbed-manager] => (item=None) 2025-10-09 09:50:08.356971 | orchestrator | changed: 
[testbed-manager] => (item=None) 2025-10-09 09:50:08.357028 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:08.357041 | orchestrator | 2025-10-09 09:50:08.357053 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-10-09 09:50:09.053017 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:09.053092 | orchestrator | 2025-10-09 09:50:09.053107 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-10-09 09:50:09.118922 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:50:09.118955 | orchestrator | 2025-10-09 09:50:09.118969 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-10-09 09:50:09.217480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-10-09 09:50:09.217509 | orchestrator | 2025-10-09 09:50:09.217522 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-10-09 09:50:09.791829 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:09.791910 | orchestrator | 2025-10-09 09:50:09.791925 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-10-09 09:50:10.227918 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:10.228037 | orchestrator | 2025-10-09 09:50:10.228064 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-10-09 09:50:11.554211 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-10-09 09:50:11.554273 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-10-09 09:50:11.554283 | orchestrator | 2025-10-09 09:50:11.554292 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-10-09 09:50:12.235422 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:12.235501 | orchestrator | 2025-10-09 09:50:12.235513 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-10-09 09:50:12.660342 | orchestrator | ok: [testbed-manager] 2025-10-09 09:50:12.660441 | orchestrator | 2025-10-09 09:50:12.660457 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-10-09 09:50:13.044550 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:13.044703 | orchestrator | 2025-10-09 09:50:13.044721 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-10-09 09:50:13.087120 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:50:13.087171 | orchestrator | 2025-10-09 09:50:13.087183 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-10-09 09:50:13.165728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-10-09 09:50:13.165790 | orchestrator | 2025-10-09 09:50:13.165802 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-10-09 09:50:13.213255 | orchestrator | ok: [testbed-manager] 2025-10-09 09:50:13.213301 | orchestrator | 2025-10-09 09:50:13.213312 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-10-09 09:50:15.359225 | orchestrator | changed: [testbed-manager] => 
(item=osism) 2025-10-09 09:50:15.359313 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-10-09 09:50:15.359329 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-10-09 09:50:15.359342 | orchestrator | 2025-10-09 09:50:15.359356 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-10-09 09:50:16.112345 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:16.112455 | orchestrator | 2025-10-09 09:50:16.112472 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-10-09 09:50:16.880759 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:16.880856 | orchestrator | 2025-10-09 09:50:16.880871 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-10-09 09:50:17.649155 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:17.649249 | orchestrator | 2025-10-09 09:50:17.649265 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-10-09 09:50:17.733519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-10-09 09:50:17.733556 | orchestrator | 2025-10-09 09:50:17.733569 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-10-09 09:50:17.789452 | orchestrator | ok: [testbed-manager] 2025-10-09 09:50:17.789501 | orchestrator | 2025-10-09 09:50:17.789513 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-10-09 09:50:18.596236 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-10-09 09:50:18.596329 | orchestrator | 2025-10-09 09:50:18.596344 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-10-09 09:50:18.682410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-10-09 09:50:18.682496 | orchestrator | 2025-10-09 09:50:18.682511 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-10-09 09:50:19.454078 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:19.454179 | orchestrator | 2025-10-09 09:50:19.454196 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-10-09 09:50:20.081687 | orchestrator | ok: [testbed-manager] 2025-10-09 09:50:20.081777 | orchestrator | 2025-10-09 09:50:20.081792 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-10-09 09:50:20.144137 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:50:20.144172 | orchestrator | 2025-10-09 09:50:20.144184 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-10-09 09:50:20.190506 | orchestrator | ok: [testbed-manager] 2025-10-09 09:50:20.190533 | orchestrator | 2025-10-09 09:50:20.190544 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-10-09 09:50:21.046127 | orchestrator | changed: [testbed-manager] 2025-10-09 09:50:21.046221 | orchestrator | 2025-10-09 09:50:21.046236 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-10-09 09:51:36.153592 | orchestrator | changed: 
[testbed-manager] 2025-10-09 09:51:36.153706 | orchestrator | 2025-10-09 09:51:36.153722 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-10-09 09:51:37.232619 | orchestrator | ok: [testbed-manager] 2025-10-09 09:51:37.232724 | orchestrator | 2025-10-09 09:51:37.232740 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-10-09 09:51:37.284244 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:51:37.284310 | orchestrator | 2025-10-09 09:51:37.284324 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-10-09 09:51:40.761853 | orchestrator | changed: [testbed-manager] 2025-10-09 09:51:40.762007 | orchestrator | 2025-10-09 09:51:40.762081 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-10-09 09:51:40.841607 | orchestrator | ok: [testbed-manager] 2025-10-09 09:51:40.841651 | orchestrator | 2025-10-09 09:51:40.841663 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-10-09 09:51:40.841675 | orchestrator | 2025-10-09 09:51:40.841687 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-10-09 09:51:40.891881 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:51:40.891935 | orchestrator | 2025-10-09 09:51:40.891948 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-10-09 09:52:40.943917 | orchestrator | Pausing for 60 seconds 2025-10-09 09:52:40.944041 | orchestrator | changed: [testbed-manager] 2025-10-09 09:52:40.944057 | orchestrator | 2025-10-09 09:52:40.944071 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-10-09 09:52:46.184489 | orchestrator | changed: [testbed-manager] 2025-10-09 09:52:46.184617 | orchestrator | 2025-10-09 09:52:46.184637 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-10-09 09:53:48.583427 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-10-09 09:53:48.583570 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-10-09 09:53:48.583589 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
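The handler "Wait for an healthy manager service" above simply retries until Docker reports the manager container healthy (it succeeds here after a few retries). A sketch of the same check as a shell loop; the container name, poll interval, and retry bound are assumptions for illustration:

for i in $(seq 1 50); do            # the handler allows 50 retries
  docker inspect --format '{{.State.Health.Status}}' manager 2>/dev/null | grep -qx healthy && break
  sleep 5
done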
2025-10-09 09:53:48.583602 | orchestrator | changed: [testbed-manager] 2025-10-09 09:53:48.583616 | orchestrator | 2025-10-09 09:53:48.583629 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-10-09 09:53:59.556184 | orchestrator | changed: [testbed-manager] 2025-10-09 09:53:59.556312 | orchestrator | 2025-10-09 09:53:59.556338 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-10-09 09:53:59.642244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-10-09 09:53:59.642330 | orchestrator | 2025-10-09 09:53:59.642345 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-10-09 09:53:59.642357 | orchestrator | 2025-10-09 09:53:59.642369 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-10-09 09:53:59.699377 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:53:59.699424 | orchestrator | 2025-10-09 09:53:59.699438 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2025-10-09 09:53:59.787173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2025-10-09 09:53:59.787246 | orchestrator | 2025-10-09 09:53:59.787265 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2025-10-09 09:54:00.582508 | orchestrator | changed: [testbed-manager] 2025-10-09 09:54:00.582609 | orchestrator | 2025-10-09 09:54:00.582624 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2025-10-09 09:54:04.440287 | orchestrator | ok: [testbed-manager] 2025-10-09 09:54:04.440397 | orchestrator | 2025-10-09 09:54:04.440415 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2025-10-09 09:54:04.514978 | orchestrator | ok: [testbed-manager] => { 2025-10-09 09:54:04.515064 | orchestrator | "version_check_result.stdout_lines": [ 2025-10-09 09:54:04.515080 | orchestrator | "=== OSISM Container Version Check ===", 2025-10-09 09:54:04.515092 | orchestrator | "Checking running containers against expected versions...", 2025-10-09 09:54:04.515105 | orchestrator | "", 2025-10-09 09:54:04.515117 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2025-10-09 09:54:04.515129 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20250927.0", 2025-10-09 09:54:04.515140 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515151 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20250927.0", 2025-10-09 09:54:04.515163 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515174 | orchestrator | "", 2025-10-09 09:54:04.515186 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2025-10-09 09:54:04.515220 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20250927.0", 2025-10-09 09:54:04.515232 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515243 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20250927.0", 2025-10-09 09:54:04.515254 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515265 | orchestrator | "", 2025-10-09 09:54:04.515276 | orchestrator | "Checking service: 
osism-kubernetes (Osism-Kubernetes Service)", 2025-10-09 09:54:04.515287 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20250927.0", 2025-10-09 09:54:04.515298 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515309 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20250927.0", 2025-10-09 09:54:04.515319 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515330 | orchestrator | "", 2025-10-09 09:54:04.515341 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2025-10-09 09:54:04.515352 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20250927.0", 2025-10-09 09:54:04.515363 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515377 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20250927.0", 2025-10-09 09:54:04.515388 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515399 | orchestrator | "", 2025-10-09 09:54:04.515410 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2025-10-09 09:54:04.515421 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20250928.0", 2025-10-09 09:54:04.515431 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515442 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20250928.0", 2025-10-09 09:54:04.515453 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515464 | orchestrator | "", 2025-10-09 09:54:04.515475 | orchestrator | "Checking service: osismclient (OSISM Client)", 2025-10-09 09:54:04.515486 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.515496 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515507 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.515519 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515532 | orchestrator | "", 2025-10-09 09:54:04.515545 | orchestrator | "Checking service: ara-server (ARA Server)", 2025-10-09 09:54:04.515558 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2025-10-09 09:54:04.515572 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515584 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2025-10-09 09:54:04.515597 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515609 | orchestrator | "", 2025-10-09 09:54:04.515621 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2025-10-09 09:54:04.515635 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-10-09 09:54:04.515646 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515656 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-10-09 09:54:04.515667 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515678 | orchestrator | "", 2025-10-09 09:54:04.515689 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2025-10-09 09:54:04.515700 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20250919.0", 2025-10-09 09:54:04.515711 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515721 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20250919.0", 2025-10-09 09:54:04.515732 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515743 | orchestrator | "", 2025-10-09 09:54:04.515754 | orchestrator | "Checking service: redis (Redis Cache)", 2025-10-09 09:54:04.515765 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 
2025-10-09 09:54:04.515813 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515825 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-10-09 09:54:04.515836 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515847 | orchestrator | "", 2025-10-09 09:54:04.515858 | orchestrator | "Checking service: api (OSISM API Service)", 2025-10-09 09:54:04.515877 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.515888 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515899 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.515910 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515921 | orchestrator | "", 2025-10-09 09:54:04.515932 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2025-10-09 09:54:04.515943 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.515953 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.515964 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.515975 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.515986 | orchestrator | "", 2025-10-09 09:54:04.515998 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2025-10-09 09:54:04.516009 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.516020 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.516031 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.516042 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.516053 | orchestrator | "", 2025-10-09 09:54:04.516064 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2025-10-09 09:54:04.516074 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.516085 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.516096 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.516124 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.516135 | orchestrator | "", 2025-10-09 09:54:04.516156 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2025-10-09 09:54:04.516167 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.516178 | orchestrator | " Enabled: true", 2025-10-09 09:54:04.516189 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20250919.0", 2025-10-09 09:54:04.516200 | orchestrator | " Status: ✅ MATCH", 2025-10-09 09:54:04.516211 | orchestrator | "", 2025-10-09 09:54:04.516222 | orchestrator | "=== Summary ===", 2025-10-09 09:54:04.516233 | orchestrator | "Errors (version mismatches): 0", 2025-10-09 09:54:04.516244 | orchestrator | "Warnings (expected containers not running): 0", 2025-10-09 09:54:04.516256 | orchestrator | "", 2025-10-09 09:54:04.516267 | orchestrator | "✅ All running containers match expected versions!" 
2025-10-09 09:54:04.516278 | orchestrator | ] 2025-10-09 09:54:04.516289 | orchestrator | } 2025-10-09 09:54:04.516300 | orchestrator | 2025-10-09 09:54:04.516312 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-10-09 09:54:04.565594 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:54:04.565676 | orchestrator | 2025-10-09 09:54:04.565690 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:54:04.565703 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-10-09 09:54:04.565714 | orchestrator | 2025-10-09 09:54:04.648318 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-10-09 09:54:04.648439 | orchestrator | + deactivate 2025-10-09 09:54:04.648455 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-10-09 09:54:04.648469 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-10-09 09:54:04.648480 | orchestrator | + export PATH 2025-10-09 09:54:04.648492 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-10-09 09:54:04.648504 | orchestrator | + '[' -n '' ']' 2025-10-09 09:54:04.648516 | orchestrator | + hash -r 2025-10-09 09:54:04.648527 | orchestrator | + '[' -n '' ']' 2025-10-09 09:54:04.648538 | orchestrator | + unset VIRTUAL_ENV 2025-10-09 09:54:04.648549 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-10-09 09:54:04.648560 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-10-09 09:54:04.648571 | orchestrator | + unset -f deactivate 2025-10-09 09:54:04.648583 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-10-09 09:54:04.656702 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-10-09 09:54:04.656728 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-10-09 09:54:04.656768 | orchestrator | + local max_attempts=60 2025-10-09 09:54:04.656812 | orchestrator | + local name=ceph-ansible 2025-10-09 09:54:04.656823 | orchestrator | + local attempt_num=1 2025-10-09 09:54:04.657945 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 09:54:04.701728 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 09:54:04.701801 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-10-09 09:54:04.701815 | orchestrator | + local max_attempts=60 2025-10-09 09:54:04.701829 | orchestrator | + local name=kolla-ansible 2025-10-09 09:54:04.701842 | orchestrator | + local attempt_num=1 2025-10-09 09:54:04.702546 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-10-09 09:54:04.736102 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 09:54:04.736133 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-10-09 09:54:04.736144 | orchestrator | + local max_attempts=60 2025-10-09 09:54:04.736156 | orchestrator | + local name=osism-ansible 2025-10-09 09:54:04.736168 | orchestrator | + local attempt_num=1 2025-10-09 09:54:04.736500 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-10-09 09:54:04.763849 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 09:54:04.763877 | orchestrator | + [[ true == \t\r\u\e ]] 2025-10-09 09:54:04.763888 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-10-09 
09:54:05.422165 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-10-09 09:54:05.642747 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-10-09 09:54:05.642851 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250927.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-10-09 09:54:05.642868 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250928.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-10-09 09:54:05.642880 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-10-09 09:54:05.642894 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-10-09 09:54:05.642931 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:54:05.642943 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:54:05.642954 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250927.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2025-10-09 09:54:05.642966 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:54:05.642977 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-10-09 09:54:05.642988 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:54:05.642999 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-10-09 09:54:05.643011 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250927.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-10-09 09:54:05.643051 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20250919.0 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2025-10-09 09:54:05.643063 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250927.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-10-09 09:54:05.643074 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:54:05.651767 | orchestrator | ++ semver 9.4.0 7.0.0 2025-10-09 09:54:05.707623 | orchestrator | + [[ 1 -ge 0 ]] 2025-10-09 09:54:05.707651 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-10-09 09:54:05.712707 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-10-09 09:54:17.829044 | orchestrator | 2025-10-09 09:54:17 | INFO  | Task 37f46aab-46b1-432d-9338-215b326a4240 (resolvconf) was prepared for execution. 
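The shell trace above polls "docker inspect -f '{{.State.Health.Status}}'" until each of the ceph-ansible, kolla-ansible and osism-ansible containers reports healthy before continuing. A minimal bash sketch of that polling helper, assuming a retry interval and failure handling that the trace does not show (only the variable names and the docker inspect call are taken from the log; the real wait_for_container_healthy ships with the testbed scripts and may differ):

# Assumed sketch of the health-wait pattern visible in the trace above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container health status until Docker reports "healthy".
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # retry interval assumed, not shown in the trace
    done
}

wait_for_container_healthy 60 ceph-ansible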
2025-10-09 09:54:17.829192 | orchestrator | 2025-10-09 09:54:17 | INFO  | It takes a moment until task 37f46aab-46b1-432d-9338-215b326a4240 (resolvconf) has been started and output is visible here. 2025-10-09 09:54:32.440690 | orchestrator | 2025-10-09 09:54:32.440908 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-10-09 09:54:32.440927 | orchestrator | 2025-10-09 09:54:32.440940 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:54:32.440952 | orchestrator | Thursday 09 October 2025 09:54:22 +0000 (0:00:00.166) 0:00:00.166 ****** 2025-10-09 09:54:32.440966 | orchestrator | ok: [testbed-manager] 2025-10-09 09:54:32.440990 | orchestrator | 2025-10-09 09:54:32.441003 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-10-09 09:54:32.441016 | orchestrator | Thursday 09 October 2025 09:54:26 +0000 (0:00:04.038) 0:00:04.204 ****** 2025-10-09 09:54:32.441027 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:54:32.441041 | orchestrator | 2025-10-09 09:54:32.441052 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-10-09 09:54:32.441063 | orchestrator | Thursday 09 October 2025 09:54:26 +0000 (0:00:00.070) 0:00:04.275 ****** 2025-10-09 09:54:32.441074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-10-09 09:54:32.441087 | orchestrator | 2025-10-09 09:54:32.441098 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-10-09 09:54:32.441109 | orchestrator | Thursday 09 October 2025 09:54:26 +0000 (0:00:00.090) 0:00:04.365 ****** 2025-10-09 09:54:32.441141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-10-09 09:54:32.441153 | orchestrator | 2025-10-09 09:54:32.441165 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-10-09 09:54:32.441176 | orchestrator | Thursday 09 October 2025 09:54:26 +0000 (0:00:00.088) 0:00:04.454 ****** 2025-10-09 09:54:32.441187 | orchestrator | ok: [testbed-manager] 2025-10-09 09:54:32.441197 | orchestrator | 2025-10-09 09:54:32.441211 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-10-09 09:54:32.441224 | orchestrator | Thursday 09 October 2025 09:54:27 +0000 (0:00:01.148) 0:00:05.603 ****** 2025-10-09 09:54:32.441236 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:54:32.441248 | orchestrator | 2025-10-09 09:54:32.441260 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-10-09 09:54:32.441273 | orchestrator | Thursday 09 October 2025 09:54:27 +0000 (0:00:00.068) 0:00:05.671 ****** 2025-10-09 09:54:32.441308 | orchestrator | ok: [testbed-manager] 2025-10-09 09:54:32.441321 | orchestrator | 2025-10-09 09:54:32.441333 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-10-09 09:54:32.441346 | orchestrator | Thursday 09 October 2025 09:54:28 +0000 (0:00:00.529) 0:00:06.200 ****** 2025-10-09 09:54:32.441358 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:54:32.441371 | orchestrator | 2025-10-09 09:54:32.441383 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-10-09 09:54:32.441397 | orchestrator | Thursday 09 October 2025 09:54:28 +0000 (0:00:00.087) 0:00:06.288 ****** 2025-10-09 09:54:32.441410 | orchestrator | changed: [testbed-manager] 2025-10-09 09:54:32.441422 | orchestrator | 2025-10-09 09:54:32.441434 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-10-09 09:54:32.441447 | orchestrator | Thursday 09 October 2025 09:54:28 +0000 (0:00:00.556) 0:00:06.845 ****** 2025-10-09 09:54:32.441459 | orchestrator | changed: [testbed-manager] 2025-10-09 09:54:32.441471 | orchestrator | 2025-10-09 09:54:32.441483 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-10-09 09:54:32.441496 | orchestrator | Thursday 09 October 2025 09:54:29 +0000 (0:00:01.105) 0:00:07.950 ****** 2025-10-09 09:54:32.441508 | orchestrator | ok: [testbed-manager] 2025-10-09 09:54:32.441521 | orchestrator | 2025-10-09 09:54:32.441533 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-10-09 09:54:32.441546 | orchestrator | Thursday 09 October 2025 09:54:30 +0000 (0:00:01.040) 0:00:08.991 ****** 2025-10-09 09:54:32.441558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-10-09 09:54:32.441569 | orchestrator | 2025-10-09 09:54:32.441580 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-10-09 09:54:32.441590 | orchestrator | Thursday 09 October 2025 09:54:30 +0000 (0:00:00.092) 0:00:09.084 ****** 2025-10-09 09:54:32.441601 | orchestrator | changed: [testbed-manager] 2025-10-09 09:54:32.441611 | orchestrator | 2025-10-09 09:54:32.441622 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:54:32.441634 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 09:54:32.441645 | orchestrator | 2025-10-09 09:54:32.441656 | orchestrator | 2025-10-09 09:54:32.441666 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 09:54:32.441678 | orchestrator | Thursday 09 October 2025 09:54:32 +0000 (0:00:01.191) 0:00:10.275 ****** 2025-10-09 09:54:32.441689 | orchestrator | =============================================================================== 2025-10-09 09:54:32.441700 | orchestrator | Gathering Facts --------------------------------------------------------- 4.04s 2025-10-09 09:54:32.441710 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2025-10-09 09:54:32.441721 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.15s 2025-10-09 09:54:32.441732 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2025-10-09 09:54:32.441742 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.04s 2025-10-09 09:54:32.441771 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2025-10-09 09:54:32.441805 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2025-10-09 09:54:32.441816 | orchestrator | osism.commons.resolvconf : 
Include distribution specific configuration tasks --- 0.09s 2025-10-09 09:54:32.441827 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-10-09 09:54:32.441837 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-10-09 09:54:32.441848 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-10-09 09:54:32.441858 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-10-09 09:54:32.441877 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-10-09 09:54:32.777567 | orchestrator | + osism apply sshconfig 2025-10-09 09:54:44.881657 | orchestrator | 2025-10-09 09:54:44 | INFO  | Task 92412e76-fe28-4cf1-a60c-422d78568860 (sshconfig) was prepared for execution. 2025-10-09 09:54:44.881798 | orchestrator | 2025-10-09 09:54:44 | INFO  | It takes a moment until task 92412e76-fe28-4cf1-a60c-422d78568860 (sshconfig) has been started and output is visible here. 2025-10-09 09:54:57.166254 | orchestrator | 2025-10-09 09:54:57.166345 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-10-09 09:54:57.166354 | orchestrator | 2025-10-09 09:54:57.166377 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-10-09 09:54:57.166384 | orchestrator | Thursday 09 October 2025 09:54:48 +0000 (0:00:00.171) 0:00:00.171 ****** 2025-10-09 09:54:57.166391 | orchestrator | ok: [testbed-manager] 2025-10-09 09:54:57.166399 | orchestrator | 2025-10-09 09:54:57.166405 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-10-09 09:54:57.166412 | orchestrator | Thursday 09 October 2025 09:54:49 +0000 (0:00:00.585) 0:00:00.757 ****** 2025-10-09 09:54:57.166418 | orchestrator | changed: [testbed-manager] 2025-10-09 09:54:57.166426 | orchestrator | 2025-10-09 09:54:57.166432 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-10-09 09:54:57.166439 | orchestrator | Thursday 09 October 2025 09:54:50 +0000 (0:00:00.549) 0:00:01.306 ****** 2025-10-09 09:54:57.166445 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-10-09 09:54:57.166452 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-10-09 09:54:57.166459 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-10-09 09:54:57.166465 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-10-09 09:54:57.166471 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-10-09 09:54:57.166478 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-10-09 09:54:57.166484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-10-09 09:54:57.166490 | orchestrator | 2025-10-09 09:54:57.166496 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-10-09 09:54:57.166503 | orchestrator | Thursday 09 October 2025 09:54:56 +0000 (0:00:06.170) 0:00:07.476 ****** 2025-10-09 09:54:57.166509 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:54:57.166515 | orchestrator | 2025-10-09 09:54:57.166522 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-10-09 09:54:57.166528 | orchestrator | Thursday 09 October 2025 09:54:56 
+0000 (0:00:00.066) 0:00:07.543 ****** 2025-10-09 09:54:57.166534 | orchestrator | changed: [testbed-manager] 2025-10-09 09:54:57.166541 | orchestrator | 2025-10-09 09:54:57.166547 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:54:57.166554 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 09:54:57.166561 | orchestrator | 2025-10-09 09:54:57.166567 | orchestrator | 2025-10-09 09:54:57.166574 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 09:54:57.166580 | orchestrator | Thursday 09 October 2025 09:54:56 +0000 (0:00:00.631) 0:00:08.175 ****** 2025-10-09 09:54:57.166587 | orchestrator | =============================================================================== 2025-10-09 09:54:57.166593 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.17s 2025-10-09 09:54:57.166599 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2025-10-09 09:54:57.166605 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2025-10-09 09:54:57.166612 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s 2025-10-09 09:54:57.166618 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-10-09 09:54:57.513424 | orchestrator | + osism apply known-hosts 2025-10-09 09:55:09.744635 | orchestrator | 2025-10-09 09:55:09 | INFO  | Task 6ee63f78-72d6-4f9a-81e1-8e12a5b9f84d (known-hosts) was prepared for execution. 2025-10-09 09:55:09.744772 | orchestrator | 2025-10-09 09:55:09 | INFO  | It takes a moment until task 6ee63f78-72d6-4f9a-81e1-8e12a5b9f84d (known-hosts) has been started and output is visible here. 
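The sshconfig play above writes one snippet per testbed host into the operator's ~/.ssh/config.d directory and then assembles them into a single ~/.ssh/config. A rough shell equivalent of that pattern; the Host block options and the operator user are assumptions for illustration, the real contents come from the osism.commons.sshconfig templates:

# Assumed illustration of the per-host snippet plus assemble pattern.
mkdir -p ~/.ssh/config.d
for host in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 \
            testbed-node-3 testbed-node-4 testbed-node-5; do
    # Host block contents are assumed; "dragon" is inferred from the home path in the log.
    cat > ~/.ssh/config.d/"$host" <<EOF
Host $host
    User dragon
    IdentityFile ~/.ssh/id_rsa
EOF
done
# Assemble all snippets into the final ssh config, as the last play task does.
cat ~/.ssh/config.d/* > ~/.ssh/config
chmod 0600 ~/.ssh/config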
2025-10-09 09:55:27.320491 | orchestrator | 2025-10-09 09:55:27.320612 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-10-09 09:55:27.320629 | orchestrator | 2025-10-09 09:55:27.320641 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-10-09 09:55:27.320653 | orchestrator | Thursday 09 October 2025 09:55:13 +0000 (0:00:00.183) 0:00:00.183 ****** 2025-10-09 09:55:27.320665 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-10-09 09:55:27.320677 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-10-09 09:55:27.320688 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-10-09 09:55:27.320700 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-10-09 09:55:27.320751 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-10-09 09:55:27.320764 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-10-09 09:55:27.320775 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-10-09 09:55:27.320786 | orchestrator | 2025-10-09 09:55:27.320797 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-10-09 09:55:27.320810 | orchestrator | Thursday 09 October 2025 09:55:20 +0000 (0:00:06.119) 0:00:06.303 ****** 2025-10-09 09:55:27.320823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-10-09 09:55:27.320836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-10-09 09:55:27.320848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-10-09 09:55:27.320859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-10-09 09:55:27.320870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-10-09 09:55:27.320892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-10-09 09:55:27.320904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-10-09 09:55:27.320915 | orchestrator | 2025-10-09 09:55:27.320927 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:27.320938 | orchestrator | Thursday 09 October 2025 09:55:20 +0000 (0:00:00.163) 0:00:06.466 ****** 2025-10-09 09:55:27.320950 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGRK1thxDc6JXoGSbSqz81o79XyPzA3IuA5fQeyhK2HHF71eHcC/FGIGKaUJQmyFb3nc1xsm0KOXXClIiNOg6cA=) 2025-10-09 09:55:27.320971 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0cqNwfWWQ8Cyxe8QiHJOwiYAk4/4j6pgTQVYp3I6+xFa13OZNPySq2Hhsl3W8ZHVyhpwkjG/opWOz2OEl8iQ0z8cNQqLvRKnYAfUitc2UYK6Rdi4itzqENeqXyhnDvfesAWWbdiykIQ53boQ+kj++zldQoXqsruvgsQY0VQn7KNRx7QGJCCJgT07yP8SgOa0cTYfDqIy6Uca0I9oCCd2OAB2izAin60qVt1AX5s7jS8QeJTGsvTN1DskHLuOefmgSbk9ZGbzLi2zfccc4/eXOvLUFZeb2dKRChfyE6BfStaFBHbXuPM/ngOGF04xWn7tlK2rBLPR5LzMhEpyKhBCq37O3S9AaLmSRTNYeHmDkjTMgQh/7D8xQ1mcHzji4k7Ycgv3Uotxf/bNRvVOmSJ8M6mW9LLxkQwXZ0PDFKhg3OV6IDKFSn72pQidJoqESrTkEbRV32oD2Zl8yO+/mNVgU7RsnGd3+ZSvouhHGOw7UAjHREQJi9D3FQisU3rbVSHE=) 2025-10-09 09:55:27.321011 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFrrajkGJXvk2M+Osq+drx8IhLE3nLABKPuDIu2C3lyH) 2025-10-09 09:55:27.321025 | orchestrator | 2025-10-09 09:55:27.321039 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:27.321051 | orchestrator | Thursday 09 October 2025 09:55:21 +0000 (0:00:01.327) 0:00:07.794 ****** 2025-10-09 09:55:27.321083 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyW2DSVZc4uCaR5jv4dz604oaa2sGwMnztO0cy5YKXAwwC9HIc0OsjSF0RskVXh4vI/0u1hOtVlxoVZVucufEBt6MFPdqnn8yWT4TaBpSih9u2pR4VeANhWQAAbEVzEgGf2GG5s3CAnOoVQ66cKGJOB89o1vEoqzOsCwPNamv3wfMPufpHkgVQBAdRqAFdLIXwqY8j2lxIiRQHcgeFYDVETUXSmufCPB+OZE1Z7hzEHO20EN0ewDGRbFpy2fx3nF+rq0rwoR+5nvKCmAbvd4GKDeWvuAFJAKipjxE69QLnBAsj/Vu0Wln1mVjrd/9OK9Vlz11svSOLAUt6GdlQZKHqMJ7GXFlc28KFHYrYwMJOdsBFhwI6T1j77q433Aojrm7McMhheZBHWtNNNPWK6X3f1ph1ZlMN1FdMzLoHH0F249Mwu6iK3dX8IIgM/WtFiXqwrgtrxNlYMjeKU7O59KFzyljsxlYfImQcZxUSlzg96HMLfdO5FhbRMAoo0bn0nEc=) 2025-10-09 09:55:27.321097 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNR+mpUMwvKri9+FqgDnFrTTo02T/dyg9XFetB6h4ceNvMgeIaKtuXhZt+mDxSD4FPKa109KB/es9ec8uI6Rjjc=) 2025-10-09 09:55:27.321111 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFEHcohM85RTjFqzcsZnL3nkyqO2ifgbH0iryBD3hd3N) 2025-10-09 09:55:27.321123 | orchestrator | 2025-10-09 09:55:27.321136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:27.321149 | orchestrator | Thursday 09 October 2025 09:55:22 +0000 (0:00:01.140) 0:00:08.935 ****** 2025-10-09 09:55:27.321161 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHY/Sj0HYP6l+assdtq7bDXUyXfiHFNG0ZaM4Rc0RPb) 2025-10-09 09:55:27.321175 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSqBrXnBJfa+ogoVqxfykphVSknXZqtEfvoZ0XxGVFZ1oPcqz1wbXV5oUT491UfpYnCRCBAmXsVeF11SMd081Pd2SgpyYOOKMoukVFGz/Int/volW7Idzs2wEZIXo90ZADDTsmGfR114lVRFe8jROmYSNOYxFNNmBYfUewoVipBJKhUPGOLozBA4AX/lpKu53HLRe1tts+kMmagvBXUMnBtkKW+jdIzn8la9QpGa0SraUfk9K7BXNjboH4v3cRmdsu9btACufTsRgBl3QcR1dGe/hNkygTaiZb0ExQYTDcXaRh3h9kulqyYiqi/vQRG84WSEQNzbvfhv+Am3qYs9/5uRcLSl3/ktTYgfysnikpNSybd/zyuOIP0dQliYVJr47kK6mGcvQpp0P/276PZZMAsWFD+rd/1dVvxVWpJLIdweCo1A711NH8yijU1dp8YbZLr3Qr5Q3GMOrJ5rcmW9jU6xUiO/BZLR4/JbPknMiTz/9mMHhkhdbSATWpW+TuYVc=) 2025-10-09 09:55:27.321187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGvXtQ/sNJP0ksZtPTPoD1QYdP/eSPGll4Sv5CRAtZ6k3JUn3/IBuz+Bg1IuDk36WD0J8urNmFR7bYJl7OcKMAw=) 2025-10-09 
09:55:27.321200 | orchestrator | 2025-10-09 09:55:27.321213 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:27.321226 | orchestrator | Thursday 09 October 2025 09:55:23 +0000 (0:00:01.225) 0:00:10.160 ****** 2025-10-09 09:55:27.321239 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0vo90m+izIBloLxjguKOiFAxHkPBgbXuMWPSG5xFI2lj824y7DYhO+HNk0tIJwPAqsYpmu8/emV+QdyYVpR1fSCxF7VKWvJocMX0KuWv4mT05IHsWtSmr9qXnIJwFcecQBiACTkl2lF7HtuntKUPyzuYrrnXRgcG6t1N6b7WMgBDSsXOlyI8WZ0jCFfqPVK1HWVdmClxxHlA1HcTKaRLo/llaZQJT3thN887ay+3p3GxBNrrXncIgYi36/9nZ5AyuWaTBJg+9ASF3Yh4srbwTvQyV5ECImr3E8YONq4q3i+kh73Cn1RbNQdyRe74I0SwLFCczK2tqcD6VBXUbR/+8pZDd88IcEscnBPerRiUx6NDfH6vwZuhjgRWiTE8xUXgqlKMc1S00M7nrff1zBHuiJhEk56SeCHLeu1+bM3hVBLyKKY0H4liKa3CNToKhxouO6H2EDsKcQ4v4dBWTvUhG6huUwdH80Rs6GXHTCYiKLhvXqxzlpvFXGfysas4lxCM=) 2025-10-09 09:55:27.321259 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNqBoZYKItH521uZ8NwtS9HMQEdyXOYmb+wsqCutjTwQonm14T1fFLdB9AJBil+/v2JLVKBQS2x1xkqtJJ6CXBk=) 2025-10-09 09:55:27.321273 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKlFqBtvSgF4JtfSYyWahGfn2y26bt9GoJ27v8wu/hzl) 2025-10-09 09:55:27.321285 | orchestrator | 2025-10-09 09:55:27.321297 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:27.321308 | orchestrator | Thursday 09 October 2025 09:55:25 +0000 (0:00:01.154) 0:00:11.315 ****** 2025-10-09 09:55:27.321393 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCWq+UeyUBCRDvWucho8z6mby2pxK6Xrty6nv622XR2QvWKcDqxh242WuhxByUau1uB0urve0dVGXAUR98fPwJ0=) 2025-10-09 09:55:27.321405 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFNntsLs4+tdgZSTPNujBUC2gb3SCoCDdOcYBy6RVVeJ) 2025-10-09 09:55:27.321417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtFPThHGQyWfoGVBMqhypOdDcTPC7AipKjum/0UNA0qugX3ZnEx0ivsMrFcB0dKcG+Am6p3Fcs9kDsjrd4f8yIfmDl7X4IsVCYy+Ux0xJ2qP8/IpWUUpTD3YlJ2IuurMHbnnyaNlOsJ0U9fQGcdXVq/h8W5QJZfycEfUHTFhBRC+5Wzs2SR9RJqy+M5yI5ODanus4hcUmn01PRc3S/lMgyC6uaUtbYcs1spuEIV49JSra55XB6Q+WhI8RPhF0M1KCzGeErF9TOCNViNipvyflBZZeK9D49O17TXq9G5iq3QnBNiJta63XwCm1iFi9CcaYdeWQ9xIivTbHaK+2N2fKG/JmaIxnnoKllFt5BxXB20jznOStD19vj50GEIz5Gr4xsi9EDrA3ZbzTlC8ZVnAcdax3UVddt0AemB7fEn34FCCYewbQ+rF7iU0+QDI3SCYburX/HIptlOLPEvwDwJJMRR6GuHR4I7dtwrJTlaKauz0IfvHQhxPvCacNuuh2cVYU=) 2025-10-09 09:55:27.321428 | orchestrator | 2025-10-09 09:55:27.321439 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:27.321450 | orchestrator | Thursday 09 October 2025 09:55:26 +0000 (0:00:01.122) 0:00:12.437 ****** 2025-10-09 09:55:27.321468 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHexjMT+SvKC4hJv1ZHU5/O2Q8zDFCeaihPDz314UyS4WCGe1NWhnZgqVCfs3FYdg6ibwDADVL85Q5dAerbe8rA=) 2025-10-09 09:55:38.785452 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxDe38L03w6xuRSqGQ5cnxRjF2F/UrAaRYHZpG215+g0sDm/tvE9MO/n+ErY50kAWY3N7bJxyUafNETVz4Dp9ds91Nswf7Gd2yGZFNLSWfeC82XFutPw5um7CJ4sJTshJZvE9k7FB8fvIN57kLcBnENgpx5SMOg8NeMsNdJeOB02USvIbqkrOnx/TcIrXZFR/ilCwLvhniXNjwHQJN8/ZpZCtU8DAqZwHkKtYU+YOvuLWMFUidTLu+AOoVmhPSl5DYMgSUYGys2pl861szukFvWOrHgEjWuJ9LnNS7o+gxZjICgDb5Z3OU+KEduB3HDmfG6j5CwJfiiXL30ooPiT3+0kIR11lEnjmNjA+L+O9UeqR66QkbA7N7vQ7A1Z/8fwBpjRijix6nUCQTVbHoaqvWdaQZlwiG+b3u6Wyo+/pbXe8ijoomkpvqPBhmqgWapTDsRNnuMpvwWlYRKLZ7DM4yPY/rtaoYxkD80fuO41daShf+JDp7F0i/AdzfztV73jU=) 2025-10-09 09:55:38.785570 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJHW7FmvzBeIAsSic3U0IfTfPEBqoOK4RfUYjlkMDZpk) 2025-10-09 09:55:38.785589 | orchestrator | 2025-10-09 09:55:38.785603 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:38.785615 | orchestrator | Thursday 09 October 2025 09:55:27 +0000 (0:00:01.150) 0:00:13.588 ****** 2025-10-09 09:55:38.785627 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiqvwyXIBj57WRuSsQW0OC2NqKhFuFiS+BrooWUR27u41NLzxZVRTYtaprmy4SlMGXRp2CviC2UhtDidmP8wI41DiNU+OZCmV6fsn2zeeI8jcm39+FatANq1N3VABBy8aDwv5avjm0/y6+U+kelZHRDA8RrDix7is3fWiq9Lsy3lEgG2jJuH5FWVNvch2vBKZ/DJaQSPe1RLVlpgNSchmjwmlmEcG+XLxYep1AMs65M45aCWSEe4y8wk5HSkbioMr53gvNOQ97zIs6OTAtSi6dvM0gTr09yN+44DtHrDP2opzDfRfjNczTddlnwAIGAvFhFgG9f1ofE7/EofB+5n0G+OF0O880QMHYE2oBKssmL8Y3MHakAqYRv6Hp6Dt5l0EQeGMA3kAZwneU9dFKmbL2G8wxvbz9In4K7sP+yik9sJ/5e/eKMjCQHCl4/cBbG4Rn8Kl46w2PmP9FfQds1iOILcrhOtC1WoCprl/XgyGt59RmxZPJsfNaSzZ9ephyZkE=) 2025-10-09 09:55:38.785639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFe4YqQsflzDSMRJ80+v3RFopFYmTFMevH/baTViH7CnpL8/CHuHFUIvDNAcluYFJ9O0YGcO74Tq51v4y3n4+uE=) 2025-10-09 09:55:38.785677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0rdJm8hPyCPFrYFDCT8Gtl45wvwHtQhF252vJSJvPb) 2025-10-09 09:55:38.785689 | orchestrator | 2025-10-09 09:55:38.785700 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-10-09 09:55:38.785769 | orchestrator | Thursday 09 October 2025 09:55:28 +0000 (0:00:01.177) 0:00:14.765 ****** 2025-10-09 09:55:38.785781 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-10-09 09:55:38.785792 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-10-09 09:55:38.785803 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-10-09 09:55:38.785814 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-10-09 09:55:38.785824 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-10-09 09:55:38.785835 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-10-09 09:55:38.785846 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-10-09 09:55:38.785857 | orchestrator | 2025-10-09 09:55:38.785868 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-10-09 09:55:38.785880 | orchestrator | Thursday 09 October 2025 09:55:33 +0000 (0:00:05.484) 0:00:20.250 ****** 2025-10-09 09:55:38.785892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2025-10-09 09:55:38.785905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-10-09 09:55:38.785916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-10-09 09:55:38.785927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-10-09 09:55:38.785938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-10-09 09:55:38.785949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-10-09 09:55:38.785960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-10-09 09:55:38.785971 | orchestrator | 2025-10-09 09:55:38.786000 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:38.786014 | orchestrator | Thursday 09 October 2025 09:55:34 +0000 (0:00:00.197) 0:00:20.447 ****** 2025-10-09 09:55:38.786079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGRK1thxDc6JXoGSbSqz81o79XyPzA3IuA5fQeyhK2HHF71eHcC/FGIGKaUJQmyFb3nc1xsm0KOXXClIiNOg6cA=) 2025-10-09 09:55:38.786114 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0cqNwfWWQ8Cyxe8QiHJOwiYAk4/4j6pgTQVYp3I6+xFa13OZNPySq2Hhsl3W8ZHVyhpwkjG/opWOz2OEl8iQ0z8cNQqLvRKnYAfUitc2UYK6Rdi4itzqENeqXyhnDvfesAWWbdiykIQ53boQ+kj++zldQoXqsruvgsQY0VQn7KNRx7QGJCCJgT07yP8SgOa0cTYfDqIy6Uca0I9oCCd2OAB2izAin60qVt1AX5s7jS8QeJTGsvTN1DskHLuOefmgSbk9ZGbzLi2zfccc4/eXOvLUFZeb2dKRChfyE6BfStaFBHbXuPM/ngOGF04xWn7tlK2rBLPR5LzMhEpyKhBCq37O3S9AaLmSRTNYeHmDkjTMgQh/7D8xQ1mcHzji4k7Ycgv3Uotxf/bNRvVOmSJ8M6mW9LLxkQwXZ0PDFKhg3OV6IDKFSn72pQidJoqESrTkEbRV32oD2Zl8yO+/mNVgU7RsnGd3+ZSvouhHGOw7UAjHREQJi9D3FQisU3rbVSHE=) 2025-10-09 09:55:38.786137 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFrrajkGJXvk2M+Osq+drx8IhLE3nLABKPuDIu2C3lyH) 2025-10-09 09:55:38.786150 | orchestrator | 2025-10-09 09:55:38.786163 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:38.786176 | orchestrator | Thursday 09 October 2025 09:55:35 +0000 (0:00:01.161) 0:00:21.609 ****** 2025-10-09 09:55:38.786188 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFEHcohM85RTjFqzcsZnL3nkyqO2ifgbH0iryBD3hd3N) 2025-10-09 09:55:38.786201 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyW2DSVZc4uCaR5jv4dz604oaa2sGwMnztO0cy5YKXAwwC9HIc0OsjSF0RskVXh4vI/0u1hOtVlxoVZVucufEBt6MFPdqnn8yWT4TaBpSih9u2pR4VeANhWQAAbEVzEgGf2GG5s3CAnOoVQ66cKGJOB89o1vEoqzOsCwPNamv3wfMPufpHkgVQBAdRqAFdLIXwqY8j2lxIiRQHcgeFYDVETUXSmufCPB+OZE1Z7hzEHO20EN0ewDGRbFpy2fx3nF+rq0rwoR+5nvKCmAbvd4GKDeWvuAFJAKipjxE69QLnBAsj/Vu0Wln1mVjrd/9OK9Vlz11svSOLAUt6GdlQZKHqMJ7GXFlc28KFHYrYwMJOdsBFhwI6T1j77q433Aojrm7McMhheZBHWtNNNPWK6X3f1ph1ZlMN1FdMzLoHH0F249Mwu6iK3dX8IIgM/WtFiXqwrgtrxNlYMjeKU7O59KFzyljsxlYfImQcZxUSlzg96HMLfdO5FhbRMAoo0bn0nEc=) 2025-10-09 09:55:38.786215 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNR+mpUMwvKri9+FqgDnFrTTo02T/dyg9XFetB6h4ceNvMgeIaKtuXhZt+mDxSD4FPKa109KB/es9ec8uI6Rjjc=) 2025-10-09 09:55:38.786227 | orchestrator | 2025-10-09 09:55:38.786239 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:38.786251 | orchestrator | Thursday 09 October 2025 09:55:36 +0000 (0:00:01.135) 0:00:22.745 ****** 2025-10-09 09:55:38.786263 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSqBrXnBJfa+ogoVqxfykphVSknXZqtEfvoZ0XxGVFZ1oPcqz1wbXV5oUT491UfpYnCRCBAmXsVeF11SMd081Pd2SgpyYOOKMoukVFGz/Int/volW7Idzs2wEZIXo90ZADDTsmGfR114lVRFe8jROmYSNOYxFNNmBYfUewoVipBJKhUPGOLozBA4AX/lpKu53HLRe1tts+kMmagvBXUMnBtkKW+jdIzn8la9QpGa0SraUfk9K7BXNjboH4v3cRmdsu9btACufTsRgBl3QcR1dGe/hNkygTaiZb0ExQYTDcXaRh3h9kulqyYiqi/vQRG84WSEQNzbvfhv+Am3qYs9/5uRcLSl3/ktTYgfysnikpNSybd/zyuOIP0dQliYVJr47kK6mGcvQpp0P/276PZZMAsWFD+rd/1dVvxVWpJLIdweCo1A711NH8yijU1dp8YbZLr3Qr5Q3GMOrJ5rcmW9jU6xUiO/BZLR4/JbPknMiTz/9mMHhkhdbSATWpW+TuYVc=) 2025-10-09 09:55:38.786276 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGvXtQ/sNJP0ksZtPTPoD1QYdP/eSPGll4Sv5CRAtZ6k3JUn3/IBuz+Bg1IuDk36WD0J8urNmFR7bYJl7OcKMAw=) 2025-10-09 09:55:38.786288 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHY/Sj0HYP6l+assdtq7bDXUyXfiHFNG0ZaM4Rc0RPb) 2025-10-09 09:55:38.786300 | orchestrator | 2025-10-09 09:55:38.786313 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:38.786325 | orchestrator | Thursday 09 October 2025 09:55:37 +0000 (0:00:01.157) 0:00:23.902 ****** 2025-10-09 09:55:38.786347 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0vo90m+izIBloLxjguKOiFAxHkPBgbXuMWPSG5xFI2lj824y7DYhO+HNk0tIJwPAqsYpmu8/emV+QdyYVpR1fSCxF7VKWvJocMX0KuWv4mT05IHsWtSmr9qXnIJwFcecQBiACTkl2lF7HtuntKUPyzuYrrnXRgcG6t1N6b7WMgBDSsXOlyI8WZ0jCFfqPVK1HWVdmClxxHlA1HcTKaRLo/llaZQJT3thN887ay+3p3GxBNrrXncIgYi36/9nZ5AyuWaTBJg+9ASF3Yh4srbwTvQyV5ECImr3E8YONq4q3i+kh73Cn1RbNQdyRe74I0SwLFCczK2tqcD6VBXUbR/+8pZDd88IcEscnBPerRiUx6NDfH6vwZuhjgRWiTE8xUXgqlKMc1S00M7nrff1zBHuiJhEk56SeCHLeu1+bM3hVBLyKKY0H4liKa3CNToKhxouO6H2EDsKcQ4v4dBWTvUhG6huUwdH80Rs6GXHTCYiKLhvXqxzlpvFXGfysas4lxCM=) 2025-10-09 09:55:43.590427 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNqBoZYKItH521uZ8NwtS9HMQEdyXOYmb+wsqCutjTwQonm14T1fFLdB9AJBil+/v2JLVKBQS2x1xkqtJJ6CXBk=) 2025-10-09 09:55:43.590527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKlFqBtvSgF4JtfSYyWahGfn2y26bt9GoJ27v8wu/hzl) 2025-10-09 
09:55:43.590570 | orchestrator | 2025-10-09 09:55:43.590585 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:43.590598 | orchestrator | Thursday 09 October 2025 09:55:38 +0000 (0:00:01.154) 0:00:25.056 ****** 2025-10-09 09:55:43.590611 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtFPThHGQyWfoGVBMqhypOdDcTPC7AipKjum/0UNA0qugX3ZnEx0ivsMrFcB0dKcG+Am6p3Fcs9kDsjrd4f8yIfmDl7X4IsVCYy+Ux0xJ2qP8/IpWUUpTD3YlJ2IuurMHbnnyaNlOsJ0U9fQGcdXVq/h8W5QJZfycEfUHTFhBRC+5Wzs2SR9RJqy+M5yI5ODanus4hcUmn01PRc3S/lMgyC6uaUtbYcs1spuEIV49JSra55XB6Q+WhI8RPhF0M1KCzGeErF9TOCNViNipvyflBZZeK9D49O17TXq9G5iq3QnBNiJta63XwCm1iFi9CcaYdeWQ9xIivTbHaK+2N2fKG/JmaIxnnoKllFt5BxXB20jznOStD19vj50GEIz5Gr4xsi9EDrA3ZbzTlC8ZVnAcdax3UVddt0AemB7fEn34FCCYewbQ+rF7iU0+QDI3SCYburX/HIptlOLPEvwDwJJMRR6GuHR4I7dtwrJTlaKauz0IfvHQhxPvCacNuuh2cVYU=) 2025-10-09 09:55:43.590626 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCWq+UeyUBCRDvWucho8z6mby2pxK6Xrty6nv622XR2QvWKcDqxh242WuhxByUau1uB0urve0dVGXAUR98fPwJ0=) 2025-10-09 09:55:43.590638 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFNntsLs4+tdgZSTPNujBUC2gb3SCoCDdOcYBy6RVVeJ) 2025-10-09 09:55:43.590650 | orchestrator | 2025-10-09 09:55:43.590662 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:43.590674 | orchestrator | Thursday 09 October 2025 09:55:39 +0000 (0:00:01.198) 0:00:26.254 ****** 2025-10-09 09:55:43.590686 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHexjMT+SvKC4hJv1ZHU5/O2Q8zDFCeaihPDz314UyS4WCGe1NWhnZgqVCfs3FYdg6ibwDADVL85Q5dAerbe8rA=) 2025-10-09 09:55:43.590758 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxDe38L03w6xuRSqGQ5cnxRjF2F/UrAaRYHZpG215+g0sDm/tvE9MO/n+ErY50kAWY3N7bJxyUafNETVz4Dp9ds91Nswf7Gd2yGZFNLSWfeC82XFutPw5um7CJ4sJTshJZvE9k7FB8fvIN57kLcBnENgpx5SMOg8NeMsNdJeOB02USvIbqkrOnx/TcIrXZFR/ilCwLvhniXNjwHQJN8/ZpZCtU8DAqZwHkKtYU+YOvuLWMFUidTLu+AOoVmhPSl5DYMgSUYGys2pl861szukFvWOrHgEjWuJ9LnNS7o+gxZjICgDb5Z3OU+KEduB3HDmfG6j5CwJfiiXL30ooPiT3+0kIR11lEnjmNjA+L+O9UeqR66QkbA7N7vQ7A1Z/8fwBpjRijix6nUCQTVbHoaqvWdaQZlwiG+b3u6Wyo+/pbXe8ijoomkpvqPBhmqgWapTDsRNnuMpvwWlYRKLZ7DM4yPY/rtaoYxkD80fuO41daShf+JDp7F0i/AdzfztV73jU=) 2025-10-09 09:55:43.590773 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJHW7FmvzBeIAsSic3U0IfTfPEBqoOK4RfUYjlkMDZpk) 2025-10-09 09:55:43.590784 | orchestrator | 2025-10-09 09:55:43.590795 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:55:43.590806 | orchestrator | Thursday 09 October 2025 09:55:41 +0000 (0:00:01.111) 0:00:27.366 ****** 2025-10-09 09:55:43.590834 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDiqvwyXIBj57WRuSsQW0OC2NqKhFuFiS+BrooWUR27u41NLzxZVRTYtaprmy4SlMGXRp2CviC2UhtDidmP8wI41DiNU+OZCmV6fsn2zeeI8jcm39+FatANq1N3VABBy8aDwv5avjm0/y6+U+kelZHRDA8RrDix7is3fWiq9Lsy3lEgG2jJuH5FWVNvch2vBKZ/DJaQSPe1RLVlpgNSchmjwmlmEcG+XLxYep1AMs65M45aCWSEe4y8wk5HSkbioMr53gvNOQ97zIs6OTAtSi6dvM0gTr09yN+44DtHrDP2opzDfRfjNczTddlnwAIGAvFhFgG9f1ofE7/EofB+5n0G+OF0O880QMHYE2oBKssmL8Y3MHakAqYRv6Hp6Dt5l0EQeGMA3kAZwneU9dFKmbL2G8wxvbz9In4K7sP+yik9sJ/5e/eKMjCQHCl4/cBbG4Rn8Kl46w2PmP9FfQds1iOILcrhOtC1WoCprl/XgyGt59RmxZPJsfNaSzZ9ephyZkE=) 2025-10-09 09:55:43.590847 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFe4YqQsflzDSMRJ80+v3RFopFYmTFMevH/baTViH7CnpL8/CHuHFUIvDNAcluYFJ9O0YGcO74Tq51v4y3n4+uE=) 2025-10-09 09:55:43.590858 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0rdJm8hPyCPFrYFDCT8Gtl45wvwHtQhF252vJSJvPb) 2025-10-09 09:55:43.590869 | orchestrator | 2025-10-09 09:55:43.590880 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-10-09 09:55:43.590899 | orchestrator | Thursday 09 October 2025 09:55:42 +0000 (0:00:01.143) 0:00:28.509 ****** 2025-10-09 09:55:43.590910 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-10-09 09:55:43.590921 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-10-09 09:55:43.590948 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-10-09 09:55:43.590959 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-10-09 09:55:43.590971 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-10-09 09:55:43.590985 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-10-09 09:55:43.590997 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-10-09 09:55:43.591009 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:55:43.591022 | orchestrator | 2025-10-09 09:55:43.591034 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-10-09 09:55:43.591047 | orchestrator | Thursday 09 October 2025 09:55:42 +0000 (0:00:00.177) 0:00:28.686 ****** 2025-10-09 09:55:43.591058 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:55:43.591070 | orchestrator | 2025-10-09 09:55:43.591082 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-10-09 09:55:43.591099 | orchestrator | Thursday 09 October 2025 09:55:42 +0000 (0:00:00.094) 0:00:28.781 ****** 2025-10-09 09:55:43.591112 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:55:43.591124 | orchestrator | 2025-10-09 09:55:43.591136 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-10-09 09:55:43.591149 | orchestrator | Thursday 09 October 2025 09:55:42 +0000 (0:00:00.068) 0:00:28.849 ****** 2025-10-09 09:55:43.591161 | orchestrator | changed: [testbed-manager] 2025-10-09 09:55:43.591173 | orchestrator | 2025-10-09 09:55:43.591185 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:55:43.591198 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 09:55:43.591211 | orchestrator | 2025-10-09 09:55:43.591223 | orchestrator | 2025-10-09 09:55:43.591235 | orchestrator | TASKS RECAP 
******************************************************************** 2025-10-09 09:55:43.591247 | orchestrator | Thursday 09 October 2025 09:55:43 +0000 (0:00:00.745) 0:00:29.595 ****** 2025-10-09 09:55:43.591260 | orchestrator | =============================================================================== 2025-10-09 09:55:43.591271 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.12s 2025-10-09 09:55:43.591283 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.48s 2025-10-09 09:55:43.591296 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.33s 2025-10-09 09:55:43.591308 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-10-09 09:55:43.591320 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-10-09 09:55:43.591331 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-10-09 09:55:43.591342 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-10-09 09:55:43.591352 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-10-09 09:55:43.591363 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-10-09 09:55:43.591373 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-10-09 09:55:43.591384 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-10-09 09:55:43.591395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-10-09 09:55:43.591405 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-10-09 09:55:43.591416 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-10-09 09:55:43.591433 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-10-09 09:55:43.591444 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-10-09 09:55:43.591454 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2025-10-09 09:55:43.591465 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2025-10-09 09:55:43.591476 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-10-09 09:55:43.591487 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-10-09 09:55:43.944848 | orchestrator | + osism apply squid 2025-10-09 09:55:56.094292 | orchestrator | 2025-10-09 09:55:56 | INFO  | Task 64a279e3-e9bd-4732-ac4e-c8353d7bf0fe (squid) was prepared for execution. 2025-10-09 09:55:56.094399 | orchestrator | 2025-10-09 09:55:56 | INFO  | It takes a moment until task 64a279e3-e9bd-4732-ac4e-c8353d7bf0fe (squid) has been started and output is visible here. 
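The known-hosts play scans every testbed host twice, once by hostname and once by its 192.168.16.x ansible_host address, and writes the collected rsa/ecdsa/ed25519 keys into the operator's known_hosts file before fixing its permissions. A minimal shell equivalent of that pattern; the target file and file mode are assumptions, the hostnames, addresses and key types mirror the play output:

# Assumed sketch of the ssh-keyscan pattern shown in the known-hosts play above.
hosts="testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5"
addrs="192.168.16.5 192.168.16.10 192.168.16.11 192.168.16.12 192.168.16.13 192.168.16.14 192.168.16.15"
for target in $hosts $addrs; do
    # Collect the host keys for each name and address and append them.
    ssh-keyscan -t rsa,ecdsa,ed25519 "$target" >> ~/.ssh/known_hosts
done
chmod 0644 ~/.ssh/known_hosts   # "Set file permissions" step; exact mode assumed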
2025-10-09 09:57:52.862998 | orchestrator | 2025-10-09 09:57:52.863115 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-10-09 09:57:52.863132 | orchestrator | 2025-10-09 09:57:52.863145 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-10-09 09:57:52.863157 | orchestrator | Thursday 09 October 2025 09:56:00 +0000 (0:00:00.187) 0:00:00.187 ****** 2025-10-09 09:57:52.863168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-10-09 09:57:52.863181 | orchestrator | 2025-10-09 09:57:52.863192 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-10-09 09:57:52.863203 | orchestrator | Thursday 09 October 2025 09:56:00 +0000 (0:00:00.095) 0:00:00.283 ****** 2025-10-09 09:57:52.863214 | orchestrator | ok: [testbed-manager] 2025-10-09 09:57:52.863227 | orchestrator | 2025-10-09 09:57:52.863238 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-10-09 09:57:52.863249 | orchestrator | Thursday 09 October 2025 09:56:02 +0000 (0:00:01.625) 0:00:01.909 ****** 2025-10-09 09:57:52.863260 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-10-09 09:57:52.863271 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-10-09 09:57:52.863282 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-10-09 09:57:52.863294 | orchestrator | 2025-10-09 09:57:52.863305 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-10-09 09:57:52.863316 | orchestrator | Thursday 09 October 2025 09:56:03 +0000 (0:00:01.198) 0:00:03.108 ****** 2025-10-09 09:57:52.863327 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-10-09 09:57:52.863338 | orchestrator | 2025-10-09 09:57:52.863349 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-10-09 09:57:52.863360 | orchestrator | Thursday 09 October 2025 09:56:04 +0000 (0:00:01.160) 0:00:04.268 ****** 2025-10-09 09:57:52.863371 | orchestrator | ok: [testbed-manager] 2025-10-09 09:57:52.863382 | orchestrator | 2025-10-09 09:57:52.863392 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-10-09 09:57:52.863403 | orchestrator | Thursday 09 October 2025 09:56:04 +0000 (0:00:00.385) 0:00:04.654 ****** 2025-10-09 09:57:52.863414 | orchestrator | changed: [testbed-manager] 2025-10-09 09:57:52.863425 | orchestrator | 2025-10-09 09:57:52.863436 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-10-09 09:57:52.863447 | orchestrator | Thursday 09 October 2025 09:56:05 +0000 (0:00:01.064) 0:00:05.719 ****** 2025-10-09 09:57:52.863458 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-10-09 09:57:52.863469 | orchestrator | ok: [testbed-manager] 2025-10-09 09:57:52.863484 | orchestrator | 2025-10-09 09:57:52.863495 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-10-09 09:57:52.863531 | orchestrator | Thursday 09 October 2025 09:56:39 +0000 (0:00:33.645) 0:00:39.365 ****** 2025-10-09 09:57:52.863543 | orchestrator | changed: [testbed-manager] 2025-10-09 09:57:52.863556 | orchestrator | 2025-10-09 09:57:52.863568 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-10-09 09:57:52.863580 | orchestrator | Thursday 09 October 2025 09:56:51 +0000 (0:00:12.160) 0:00:51.525 ****** 2025-10-09 09:57:52.863592 | orchestrator | Pausing for 60 seconds 2025-10-09 09:57:52.863605 | orchestrator | changed: [testbed-manager] 2025-10-09 09:57:52.863617 | orchestrator | 2025-10-09 09:57:52.863667 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-10-09 09:57:52.863681 | orchestrator | Thursday 09 October 2025 09:57:51 +0000 (0:01:00.090) 0:01:51.616 ****** 2025-10-09 09:57:52.863693 | orchestrator | ok: [testbed-manager] 2025-10-09 09:57:52.863705 | orchestrator | 2025-10-09 09:57:52.863718 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-10-09 09:57:52.863730 | orchestrator | Thursday 09 October 2025 09:57:51 +0000 (0:00:00.058) 0:01:51.674 ****** 2025-10-09 09:57:52.863742 | orchestrator | changed: [testbed-manager] 2025-10-09 09:57:52.863755 | orchestrator | 2025-10-09 09:57:52.863767 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:57:52.863780 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 09:57:52.863792 | orchestrator | 2025-10-09 09:57:52.863805 | orchestrator | 2025-10-09 09:57:52.863817 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 09:57:52.863829 | orchestrator | Thursday 09 October 2025 09:57:52 +0000 (0:00:00.651) 0:01:52.326 ****** 2025-10-09 09:57:52.863842 | orchestrator | =============================================================================== 2025-10-09 09:57:52.863872 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-10-09 09:57:52.863885 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.65s 2025-10-09 09:57:52.863898 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.16s 2025-10-09 09:57:52.863909 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.63s 2025-10-09 09:57:52.863920 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.20s 2025-10-09 09:57:52.863931 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.16s 2025-10-09 09:57:52.863941 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.06s 2025-10-09 09:57:52.863952 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-10-09 09:57:52.863963 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-10-09 09:57:52.863974 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 
0.10s 2025-10-09 09:57:52.863985 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-10-09 09:57:53.223196 | orchestrator | + [[ 9.4.0 != \l\a\t\e\s\t ]] 2025-10-09 09:57:53.223274 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-10-09 09:57:53.228865 | orchestrator | ++ semver 9.4.0 9.0.0 2025-10-09 09:57:53.295747 | orchestrator | + [[ 1 -lt 0 ]] 2025-10-09 09:57:53.296569 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-10-09 09:58:05.481195 | orchestrator | 2025-10-09 09:58:05 | INFO  | Task c8784433-2399-4a35-a878-425297817286 (operator) was prepared for execution. 2025-10-09 09:58:05.481302 | orchestrator | 2025-10-09 09:58:05 | INFO  | It takes a moment until task c8784433-2399-4a35-a878-425297817286 (operator) has been started and output is visible here. 2025-10-09 09:58:21.767721 | orchestrator | 2025-10-09 09:58:21.767837 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-10-09 09:58:21.767854 | orchestrator | 2025-10-09 09:58:21.767866 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:58:21.767902 | orchestrator | Thursday 09 October 2025 09:58:09 +0000 (0:00:00.159) 0:00:00.159 ****** 2025-10-09 09:58:21.767914 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:58:21.767926 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:58:21.767937 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:58:21.767947 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:58:21.767957 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:58:21.767967 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:58:21.767977 | orchestrator | 2025-10-09 09:58:21.767988 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-10-09 09:58:21.767998 | orchestrator | Thursday 09 October 2025 09:58:13 +0000 (0:00:03.260) 0:00:03.420 ****** 2025-10-09 09:58:21.768009 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:58:21.768020 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:58:21.768030 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:58:21.768040 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:58:21.768050 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:58:21.768061 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:58:21.768072 | orchestrator | 2025-10-09 09:58:21.768098 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-10-09 09:58:21.768110 | orchestrator | 2025-10-09 09:58:21.768121 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-10-09 09:58:21.768131 | orchestrator | Thursday 09 October 2025 09:58:13 +0000 (0:00:00.805) 0:00:04.226 ****** 2025-10-09 09:58:21.768141 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:58:21.768151 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:58:21.768162 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:58:21.768171 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:58:21.768181 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:58:21.768192 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:58:21.768203 | orchestrator | 2025-10-09 09:58:21.768213 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-10-09 09:58:21.768223 | orchestrator | Thursday 09 October 2025 
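The "+" lines above trace the release gating in the deploy script: because the manager version is pinned to 9.4.0 rather than latest, the Kolla image namespace in the testbed configuration is switched to kolla/release, and a semver comparison (which printed 1 here) guards extra handling that would only apply to releases older than 9.0.0. A hedged sketch of that gating, with the variable name illustrative and the semver helper assumed to behave as in the log:

    # Sketch of the gating traced above; MANAGER_VERSION is an assumed name.
    MANAGER_VERSION=9.4.0
    if [[ "${MANAGER_VERSION}" != "latest" ]]; then
        # pinned releases pull Kolla images from the kolla/release namespace
        sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' \
            /opt/configuration/inventory/group_vars/all/kolla.yml
        if [[ "$(semver "${MANAGER_VERSION}" 9.0.0)" -lt 0 ]]; then
            :  # only taken for releases older than 9.0.0 (not the case here)
        fi
    fi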
09:58:14 +0000 (0:00:00.182) 0:00:04.408 ****** 2025-10-09 09:58:21.768233 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:58:21.768243 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:58:21.768254 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:58:21.768264 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:58:21.768275 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:58:21.768286 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:58:21.768296 | orchestrator | 2025-10-09 09:58:21.768307 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-10-09 09:58:21.768317 | orchestrator | Thursday 09 October 2025 09:58:14 +0000 (0:00:00.201) 0:00:04.609 ****** 2025-10-09 09:58:21.768328 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:58:21.768340 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:58:21.768350 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:58:21.768360 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:58:21.768371 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:58:21.768382 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:58:21.768392 | orchestrator | 2025-10-09 09:58:21.768402 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-10-09 09:58:21.768413 | orchestrator | Thursday 09 October 2025 09:58:14 +0000 (0:00:00.665) 0:00:05.275 ****** 2025-10-09 09:58:21.768423 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:58:21.768434 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:58:21.768444 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:58:21.768454 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:58:21.768465 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:58:21.768476 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:58:21.768486 | orchestrator | 2025-10-09 09:58:21.768496 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-10-09 09:58:21.768507 | orchestrator | Thursday 09 October 2025 09:58:15 +0000 (0:00:00.914) 0:00:06.190 ****** 2025-10-09 09:58:21.768517 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-10-09 09:58:21.768535 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-10-09 09:58:21.768546 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-10-09 09:58:21.768557 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-10-09 09:58:21.768568 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-10-09 09:58:21.768578 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-10-09 09:58:21.768589 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-10-09 09:58:21.768599 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-10-09 09:58:21.768609 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-10-09 09:58:21.768638 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-10-09 09:58:21.768648 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-10-09 09:58:21.768659 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-10-09 09:58:21.768669 | orchestrator | 2025-10-09 09:58:21.768679 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-10-09 09:58:21.768690 | orchestrator | Thursday 09 October 2025 09:58:16 +0000 (0:00:01.189) 0:00:07.380 ****** 2025-10-09 09:58:21.768700 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:58:21.768709 | orchestrator | 
changed: [testbed-node-2] 2025-10-09 09:58:21.768720 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:58:21.768730 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:58:21.768740 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:58:21.768750 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:58:21.768760 | orchestrator | 2025-10-09 09:58:21.768768 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-10-09 09:58:21.768775 | orchestrator | Thursday 09 October 2025 09:58:18 +0000 (0:00:01.291) 0:00:08.671 ****** 2025-10-09 09:58:21.768781 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-10-09 09:58:21.768787 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-10-09 09:58:21.768794 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-10-09 09:58:21.768800 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 09:58:21.768821 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 09:58:21.768827 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 09:58:21.768834 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 09:58:21.768840 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 09:58:21.768846 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 09:58:21.768852 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-10-09 09:58:21.768858 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-10-09 09:58:21.768864 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-10-09 09:58:21.768871 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-10-09 09:58:21.768877 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-10-09 09:58:21.768883 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-10-09 09:58:21.768889 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-10-09 09:58:21.768895 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-10-09 09:58:21.768902 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-10-09 09:58:21.768908 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-10-09 09:58:21.768914 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-10-09 09:58:21.768921 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-10-09 09:58:21.768927 | orchestrator | 2025-10-09 09:58:21.768933 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-10-09 09:58:21.768946 | orchestrator | Thursday 09 October 2025 09:58:19 +0000 (0:00:01.247) 0:00:09.919 ****** 2025-10-09 09:58:21.768952 | orchestrator | skipping: [testbed-node-0] 2025-10-09 09:58:21.768958 | orchestrator | skipping: [testbed-node-1] 2025-10-09 09:58:21.768965 | orchestrator | skipping: [testbed-node-2] 2025-10-09 09:58:21.768971 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:58:21.768977 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:58:21.768983 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:58:21.768989 
| orchestrator | 2025-10-09 09:58:21.768995 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-10-09 09:58:21.769002 | orchestrator | Thursday 09 October 2025 09:58:19 +0000 (0:00:00.155) 0:00:10.074 ****** 2025-10-09 09:58:21.769008 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:58:21.769014 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:58:21.769020 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:58:21.769026 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:58:21.769032 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:58:21.769039 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:58:21.769045 | orchestrator | 2025-10-09 09:58:21.769051 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-10-09 09:58:21.769057 | orchestrator | Thursday 09 October 2025 09:58:20 +0000 (0:00:00.626) 0:00:10.700 ****** 2025-10-09 09:58:21.769063 | orchestrator | skipping: [testbed-node-0] 2025-10-09 09:58:21.769070 | orchestrator | skipping: [testbed-node-1] 2025-10-09 09:58:21.769076 | orchestrator | skipping: [testbed-node-2] 2025-10-09 09:58:21.769082 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:58:21.769088 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:58:21.769094 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:58:21.769100 | orchestrator | 2025-10-09 09:58:21.769106 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-10-09 09:58:21.769113 | orchestrator | Thursday 09 October 2025 09:58:20 +0000 (0:00:00.216) 0:00:10.917 ****** 2025-10-09 09:58:21.769119 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-10-09 09:58:21.769125 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 09:58:21.769131 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:58:21.769138 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:58:21.769144 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-10-09 09:58:21.769150 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:58:21.769156 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 09:58:21.769169 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:58:21.769176 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 09:58:21.769182 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:58:21.769188 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 09:58:21.769194 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:58:21.769200 | orchestrator | 2025-10-09 09:58:21.769207 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-10-09 09:58:21.769213 | orchestrator | Thursday 09 October 2025 09:58:21 +0000 (0:00:00.741) 0:00:11.659 ****** 2025-10-09 09:58:21.769219 | orchestrator | skipping: [testbed-node-0] 2025-10-09 09:58:21.769225 | orchestrator | skipping: [testbed-node-1] 2025-10-09 09:58:21.769231 | orchestrator | skipping: [testbed-node-2] 2025-10-09 09:58:21.769238 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:58:21.769244 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:58:21.769250 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:58:21.769256 | orchestrator | 2025-10-09 09:58:21.769262 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-10-09 09:58:21.769268 | orchestrator | Thursday 09 October 
2025 09:58:21 +0000 (0:00:00.166) 0:00:11.825 ****** 2025-10-09 09:58:21.769274 | orchestrator | skipping: [testbed-node-0] 2025-10-09 09:58:21.769280 | orchestrator | skipping: [testbed-node-1] 2025-10-09 09:58:21.769287 | orchestrator | skipping: [testbed-node-2] 2025-10-09 09:58:21.769293 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:58:21.769303 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:58:21.769309 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:58:21.769315 | orchestrator | 2025-10-09 09:58:21.769322 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-10-09 09:58:21.769328 | orchestrator | Thursday 09 October 2025 09:58:21 +0000 (0:00:00.175) 0:00:12.001 ****** 2025-10-09 09:58:21.769334 | orchestrator | skipping: [testbed-node-0] 2025-10-09 09:58:21.769340 | orchestrator | skipping: [testbed-node-1] 2025-10-09 09:58:21.769346 | orchestrator | skipping: [testbed-node-2] 2025-10-09 09:58:21.769352 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:58:21.769363 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:58:22.971713 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:58:22.971797 | orchestrator | 2025-10-09 09:58:22.971808 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-10-09 09:58:22.971816 | orchestrator | Thursday 09 October 2025 09:58:21 +0000 (0:00:00.152) 0:00:12.153 ****** 2025-10-09 09:58:22.971823 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:58:22.971830 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:58:22.971836 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:58:22.971843 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:58:22.971849 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:58:22.971855 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:58:22.971861 | orchestrator | 2025-10-09 09:58:22.971868 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-10-09 09:58:22.971874 | orchestrator | Thursday 09 October 2025 09:58:22 +0000 (0:00:00.681) 0:00:12.834 ****** 2025-10-09 09:58:22.971881 | orchestrator | skipping: [testbed-node-0] 2025-10-09 09:58:22.971887 | orchestrator | skipping: [testbed-node-1] 2025-10-09 09:58:22.971893 | orchestrator | skipping: [testbed-node-2] 2025-10-09 09:58:22.971899 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:58:22.971905 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:58:22.971926 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:58:22.971933 | orchestrator | 2025-10-09 09:58:22.971939 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:58:22.971946 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 09:58:22.971954 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 09:58:22.971960 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 09:58:22.971966 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 09:58:22.971972 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 09:58:22.971978 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6 
 rescued=0 ignored=0 2025-10-09 09:58:22.971984 | orchestrator | 2025-10-09 09:58:22.971991 | orchestrator | 2025-10-09 09:58:22.971997 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 09:58:22.972004 | orchestrator | Thursday 09 October 2025 09:58:22 +0000 (0:00:00.262) 0:00:13.097 ****** 2025-10-09 09:58:22.972011 | orchestrator | =============================================================================== 2025-10-09 09:58:22.972017 | orchestrator | Gathering Facts --------------------------------------------------------- 3.26s 2025-10-09 09:58:22.972023 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s 2025-10-09 09:58:22.972029 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s 2025-10-09 09:58:22.972051 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2025-10-09 09:58:22.972057 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.91s 2025-10-09 09:58:22.972063 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s 2025-10-09 09:58:22.972069 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2025-10-09 09:58:22.972075 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.68s 2025-10-09 09:58:22.972082 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s 2025-10-09 09:58:22.972088 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s 2025-10-09 09:58:22.972094 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2025-10-09 09:58:22.972100 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2025-10-09 09:58:22.972106 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2025-10-09 09:58:22.972112 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-10-09 09:58:22.972119 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s 2025-10-09 09:58:22.972125 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2025-10-09 09:58:22.972131 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2025-10-09 09:58:22.972137 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-10-09 09:58:23.311640 | orchestrator | + osism apply --environment custom facts 2025-10-09 09:58:25.238831 | orchestrator | 2025-10-09 09:58:25 | INFO  | Trying to run play facts in environment custom 2025-10-09 09:58:35.419802 | orchestrator | 2025-10-09 09:58:35 | INFO  | Task 6307fade-d920-4b12-a2b2-ace93fd9d25a (facts) was prepared for execution. 2025-10-09 09:58:35.419922 | orchestrator | 2025-10-09 09:58:35 | INFO  | It takes a moment until task 6307fade-d920-4b12-a2b2-ace93fd9d25a (facts) has been started and output is visible here. 
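The operator play that just finished prepares a service account on every testbed node: it creates the operator group and user, adds the user to adm and sudo, installs a sudoers drop-in, seeds .bashrc with C.UTF-8 locale exports, and deploys the SSH authorized key. A rough shell equivalent, assuming the operator user is called dragon (the OSISM default; the real name comes from the role's operator_user variable) and a placeholder public key file:

    # Sketch only; user name, key path and sudoers policy are assumptions.
    groupadd dragon
    useradd --create-home --gid dragon --shell /bin/bash dragon
    usermod -aG adm,sudo dragon
    echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon
    install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh
    install -m 0600 -o dragon -g dragon operator_key.pub /home/dragon/.ssh/authorized_keys
    printf 'export LANGUAGE=C.UTF-8\nexport LANG=C.UTF-8\nexport LC_ALL=C.UTF-8\n' >> /home/dragon/.bashrc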
2025-10-09 09:59:19.983781 | orchestrator | 2025-10-09 09:59:19.983902 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-10-09 09:59:19.983919 | orchestrator | 2025-10-09 09:59:19.983931 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-10-09 09:59:19.983943 | orchestrator | Thursday 09 October 2025 09:58:39 +0000 (0:00:00.089) 0:00:00.089 ****** 2025-10-09 09:59:19.983955 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:19.983968 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:59:19.983979 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:59:19.983990 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:59:19.984000 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:19.984011 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:19.984022 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:19.984033 | orchestrator | 2025-10-09 09:59:19.984044 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-10-09 09:59:19.984055 | orchestrator | Thursday 09 October 2025 09:58:41 +0000 (0:00:01.446) 0:00:01.535 ****** 2025-10-09 09:59:19.984066 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:19.984077 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:59:19.984088 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:19.984099 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:59:19.984110 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:19.984121 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:59:19.984132 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:19.984142 | orchestrator | 2025-10-09 09:59:19.984153 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-10-09 09:59:19.984164 | orchestrator | 2025-10-09 09:59:19.984175 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-10-09 09:59:19.984210 | orchestrator | Thursday 09 October 2025 09:58:42 +0000 (0:00:01.286) 0:00:02.821 ****** 2025-10-09 09:59:19.984222 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:19.984234 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:19.984245 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:19.984256 | orchestrator | 2025-10-09 09:59:19.984267 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-10-09 09:59:19.984278 | orchestrator | Thursday 09 October 2025 09:58:42 +0000 (0:00:00.156) 0:00:02.978 ****** 2025-10-09 09:59:19.984289 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:19.984300 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:19.984311 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:19.984323 | orchestrator | 2025-10-09 09:59:19.984335 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-10-09 09:59:19.984348 | orchestrator | Thursday 09 October 2025 09:58:42 +0000 (0:00:00.237) 0:00:03.216 ****** 2025-10-09 09:59:19.984360 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:19.984372 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:19.984384 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:19.984397 | orchestrator | 2025-10-09 09:59:19.984409 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-10-09 09:59:19.984421 | orchestrator | Thursday 
09 October 2025 09:58:42 +0000 (0:00:00.206) 0:00:03.422 ****** 2025-10-09 09:59:19.984435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 09:59:19.984449 | orchestrator | 2025-10-09 09:59:19.984461 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-10-09 09:59:19.984474 | orchestrator | Thursday 09 October 2025 09:58:43 +0000 (0:00:00.155) 0:00:03.577 ****** 2025-10-09 09:59:19.984486 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:19.984498 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:19.984508 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:19.984519 | orchestrator | 2025-10-09 09:59:19.984529 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-10-09 09:59:19.984540 | orchestrator | Thursday 09 October 2025 09:58:43 +0000 (0:00:00.444) 0:00:04.022 ****** 2025-10-09 09:59:19.984551 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:59:19.984562 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:59:19.984572 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:59:19.984583 | orchestrator | 2025-10-09 09:59:19.984618 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-10-09 09:59:19.984630 | orchestrator | Thursday 09 October 2025 09:58:43 +0000 (0:00:00.121) 0:00:04.144 ****** 2025-10-09 09:59:19.984641 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:19.984652 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:19.984663 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:19.984674 | orchestrator | 2025-10-09 09:59:19.984685 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-10-09 09:59:19.984695 | orchestrator | Thursday 09 October 2025 09:58:44 +0000 (0:00:01.077) 0:00:05.221 ****** 2025-10-09 09:59:19.984706 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:19.984717 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:19.984728 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:19.984738 | orchestrator | 2025-10-09 09:59:19.984770 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-10-09 09:59:19.984805 | orchestrator | Thursday 09 October 2025 09:58:45 +0000 (0:00:00.508) 0:00:05.730 ****** 2025-10-09 09:59:19.984830 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:19.984842 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:19.984853 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:19.984863 | orchestrator | 2025-10-09 09:59:19.984874 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-10-09 09:59:19.984885 | orchestrator | Thursday 09 October 2025 09:58:46 +0000 (0:00:01.083) 0:00:06.814 ****** 2025-10-09 09:59:19.984896 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:19.984915 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:19.984926 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:19.984936 | orchestrator | 2025-10-09 09:59:19.984947 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-10-09 09:59:19.984958 | orchestrator | Thursday 09 October 2025 09:59:03 +0000 (0:00:16.906) 0:00:23.720 ****** 2025-10-09 09:59:19.985017 | orchestrator | 
skipping: [testbed-node-3] 2025-10-09 09:59:19.985030 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:59:19.985041 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:59:19.985051 | orchestrator | 2025-10-09 09:59:19.985063 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-10-09 09:59:19.985091 | orchestrator | Thursday 09 October 2025 09:59:03 +0000 (0:00:00.120) 0:00:23.840 ****** 2025-10-09 09:59:19.985102 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:19.985113 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:19.985124 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:19.985135 | orchestrator | 2025-10-09 09:59:19.985146 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-10-09 09:59:19.985157 | orchestrator | Thursday 09 October 2025 09:59:10 +0000 (0:00:07.262) 0:00:31.103 ****** 2025-10-09 09:59:19.985167 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:19.985178 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:19.985189 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:19.985200 | orchestrator | 2025-10-09 09:59:19.985210 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-10-09 09:59:19.985221 | orchestrator | Thursday 09 October 2025 09:59:11 +0000 (0:00:00.446) 0:00:31.550 ****** 2025-10-09 09:59:19.985232 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-10-09 09:59:19.985243 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-10-09 09:59:19.985254 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-10-09 09:59:19.985270 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-10-09 09:59:19.985281 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-10-09 09:59:19.985292 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-10-09 09:59:19.985303 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-10-09 09:59:19.985313 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-10-09 09:59:19.985324 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-10-09 09:59:19.985335 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-10-09 09:59:19.985346 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-10-09 09:59:19.985356 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-10-09 09:59:19.985367 | orchestrator | 2025-10-09 09:59:19.985378 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-10-09 09:59:19.985389 | orchestrator | Thursday 09 October 2025 09:59:14 +0000 (0:00:03.615) 0:00:35.166 ****** 2025-10-09 09:59:19.985400 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:19.985411 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:19.985421 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:19.985432 | orchestrator | 2025-10-09 09:59:19.985443 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 09:59:19.985454 | orchestrator | 2025-10-09 09:59:19.985464 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 09:59:19.985475 | orchestrator | 
Thursday 09 October 2025 09:59:16 +0000 (0:00:01.328) 0:00:36.495 ****** 2025-10-09 09:59:19.985486 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:59:19.985497 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:59:19.985507 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:59:19.985518 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:19.985529 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:19.985548 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:19.985559 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:19.985570 | orchestrator | 2025-10-09 09:59:19.985580 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:59:19.985628 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 09:59:19.985640 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 09:59:19.985652 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 09:59:19.985663 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 09:59:19.985674 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 09:59:19.985685 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 09:59:19.985695 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 09:59:19.985706 | orchestrator | 2025-10-09 09:59:19.985717 | orchestrator | 2025-10-09 09:59:19.985728 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 09:59:19.985739 | orchestrator | Thursday 09 October 2025 09:59:19 +0000 (0:00:03.946) 0:00:40.441 ****** 2025-10-09 09:59:19.985750 | orchestrator | =============================================================================== 2025-10-09 09:59:19.985761 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.91s 2025-10-09 09:59:19.985771 | orchestrator | Install required packages (Debian) -------------------------------------- 7.26s 2025-10-09 09:59:19.985782 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.95s 2025-10-09 09:59:19.985793 | orchestrator | Copy fact files --------------------------------------------------------- 3.62s 2025-10-09 09:59:19.985804 | orchestrator | Create custom facts directory ------------------------------------------- 1.45s 2025-10-09 09:59:19.985814 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.33s 2025-10-09 09:59:19.985831 | orchestrator | Copy fact file ---------------------------------------------------------- 1.29s 2025-10-09 09:59:20.240484 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s 2025-10-09 09:59:20.240584 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s 2025-10-09 09:59:20.240652 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.51s 2025-10-09 09:59:20.240664 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2025-10-09 09:59:20.240675 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory 
----- 0.44s 2025-10-09 09:59:20.240687 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.24s 2025-10-09 09:59:20.240697 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s 2025-10-09 09:59:20.240708 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.16s 2025-10-09 09:59:20.240719 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-10-09 09:59:20.240749 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-10-09 09:59:20.240761 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2025-10-09 09:59:20.570830 | orchestrator | + osism apply bootstrap 2025-10-09 09:59:32.666718 | orchestrator | 2025-10-09 09:59:32 | INFO  | Task 1aefe4eb-ca5b-437c-aebb-4526dd48d0a1 (bootstrap) was prepared for execution. 2025-10-09 09:59:32.666856 | orchestrator | 2025-10-09 09:59:32 | INFO  | It takes a moment until task 1aefe4eb-ca5b-437c-aebb-4526dd48d0a1 (bootstrap) has been started and output is visible here. 2025-10-09 09:59:49.228263 | orchestrator | 2025-10-09 09:59:49.228374 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-10-09 09:59:49.228391 | orchestrator | 2025-10-09 09:59:49.228403 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-10-09 09:59:49.228415 | orchestrator | Thursday 09 October 2025 09:59:37 +0000 (0:00:00.177) 0:00:00.177 ****** 2025-10-09 09:59:49.228426 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:49.228439 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:49.228450 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:49.228460 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:49.228471 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:59:49.228482 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:59:49.228492 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:59:49.228503 | orchestrator | 2025-10-09 09:59:49.228514 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 09:59:49.228524 | orchestrator | 2025-10-09 09:59:49.228535 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 09:59:49.228546 | orchestrator | Thursday 09 October 2025 09:59:37 +0000 (0:00:00.286) 0:00:00.464 ****** 2025-10-09 09:59:49.228557 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:59:49.228568 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:59:49.228627 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:59:49.228639 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:49.228650 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:49.228661 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:49.228671 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:49.228682 | orchestrator | 2025-10-09 09:59:49.228694 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-10-09 09:59:49.228705 | orchestrator | 2025-10-09 09:59:49.228716 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 09:59:49.228727 | orchestrator | Thursday 09 October 2025 09:59:41 +0000 (0:00:03.648) 0:00:04.113 ****** 2025-10-09 09:59:49.228740 | orchestrator | skipping: [testbed-manager] => 
(item=testbed-manager)  2025-10-09 09:59:49.228751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-10-09 09:59:49.228762 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-10-09 09:59:49.228773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 09:59:49.228784 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-10-09 09:59:49.228796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 09:59:49.228810 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-10-09 09:59:49.228822 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-10-09 09:59:49.228835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 09:59:49.228847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-10-09 09:59:49.228859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-10-09 09:59:49.228872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-10-09 09:59:49.228885 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-10-09 09:59:49.228897 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-10-09 09:59:49.228910 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-10-09 09:59:49.228922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-10-09 09:59:49.228934 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-10-09 09:59:49.228947 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-10-09 09:59:49.228960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-10-09 09:59:49.228972 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-10-09 09:59:49.229010 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-10-09 09:59:49.229023 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-10-09 09:59:49.229036 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-10-09 09:59:49.229048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-10-09 09:59:49.229060 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:59:49.229074 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-10-09 09:59:49.229086 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:59:49.229098 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-10-09 09:59:49.229110 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-10-09 09:59:49.229122 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-10-09 09:59:49.229135 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-10-09 09:59:49.229148 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:59:49.229159 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-10-09 09:59:49.229170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-10-09 09:59:49.229181 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-10-09 09:59:49.229192 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-10-09 09:59:49.229202 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-10-09 09:59:49.229213 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  
2025-10-09 09:59:49.229224 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-10-09 09:59:49.229235 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-10-09 09:59:49.229246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 09:59:49.229257 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-10-09 09:59:49.229268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-10-09 09:59:49.229279 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-10-09 09:59:49.229290 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 09:59:49.229301 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-10-09 09:59:49.229329 | orchestrator | skipping: [testbed-node-0] 2025-10-09 09:59:49.229341 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-10-09 09:59:49.229352 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-10-09 09:59:49.229363 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-10-09 09:59:49.229374 | orchestrator | skipping: [testbed-node-1] 2025-10-09 09:59:49.229385 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:59:49.229396 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-10-09 09:59:49.229407 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-10-09 09:59:49.229418 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-10-09 09:59:49.229428 | orchestrator | skipping: [testbed-node-2] 2025-10-09 09:59:49.229439 | orchestrator | 2025-10-09 09:59:49.229450 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-10-09 09:59:49.229461 | orchestrator | 2025-10-09 09:59:49.229472 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-10-09 09:59:49.229483 | orchestrator | Thursday 09 October 2025 09:59:41 +0000 (0:00:00.483) 0:00:04.597 ****** 2025-10-09 09:59:49.229513 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:59:49.229525 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:49.229536 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:49.229546 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:59:49.229557 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:59:49.229568 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:49.229598 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:49.229609 | orchestrator | 2025-10-09 09:59:49.229620 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-10-09 09:59:49.229641 | orchestrator | Thursday 09 October 2025 09:59:42 +0000 (0:00:01.238) 0:00:05.835 ****** 2025-10-09 09:59:49.229652 | orchestrator | ok: [testbed-node-1] 2025-10-09 09:59:49.229663 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:49.229674 | orchestrator | ok: [testbed-node-4] 2025-10-09 09:59:49.229684 | orchestrator | ok: [testbed-node-2] 2025-10-09 09:59:49.229695 | orchestrator | ok: [testbed-node-0] 2025-10-09 09:59:49.229706 | orchestrator | ok: [testbed-node-3] 2025-10-09 09:59:49.229717 | orchestrator | ok: [testbed-node-5] 2025-10-09 09:59:49.229727 | orchestrator | 2025-10-09 09:59:49.229738 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-10-09 09:59:49.229749 | orchestrator | Thursday 09 October 2025 09:59:44 +0000 
(0:00:01.326) 0:00:07.162 ****** 2025-10-09 09:59:49.229762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 09:59:49.229775 | orchestrator | 2025-10-09 09:59:49.229786 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-10-09 09:59:49.229797 | orchestrator | Thursday 09 October 2025 09:59:44 +0000 (0:00:00.322) 0:00:07.484 ****** 2025-10-09 09:59:49.229808 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:49.229819 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:49.229830 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:59:49.229841 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:59:49.229852 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:59:49.229862 | orchestrator | changed: [testbed-manager] 2025-10-09 09:59:49.229894 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:49.229913 | orchestrator | 2025-10-09 09:59:49.229932 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-10-09 09:59:49.229950 | orchestrator | Thursday 09 October 2025 09:59:46 +0000 (0:00:02.188) 0:00:09.673 ****** 2025-10-09 09:59:49.229966 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:59:49.229979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 09:59:49.229992 | orchestrator | 2025-10-09 09:59:49.230003 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-10-09 09:59:49.230014 | orchestrator | Thursday 09 October 2025 09:59:46 +0000 (0:00:00.316) 0:00:09.989 ****** 2025-10-09 09:59:49.230080 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:49.230092 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:49.230103 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:49.230114 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:59:49.230125 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:59:49.230135 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:59:49.230146 | orchestrator | 2025-10-09 09:59:49.230190 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-10-09 09:59:49.230202 | orchestrator | Thursday 09 October 2025 09:59:48 +0000 (0:00:01.074) 0:00:11.064 ****** 2025-10-09 09:59:49.230213 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:59:49.230224 | orchestrator | changed: [testbed-node-2] 2025-10-09 09:59:49.230235 | orchestrator | changed: [testbed-node-1] 2025-10-09 09:59:49.230246 | orchestrator | changed: [testbed-node-4] 2025-10-09 09:59:49.230257 | orchestrator | changed: [testbed-node-5] 2025-10-09 09:59:49.230267 | orchestrator | changed: [testbed-node-3] 2025-10-09 09:59:49.230278 | orchestrator | changed: [testbed-node-0] 2025-10-09 09:59:49.230289 | orchestrator | 2025-10-09 09:59:49.230300 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-10-09 09:59:49.230311 | orchestrator | Thursday 09 October 2025 09:59:48 +0000 (0:00:00.578) 0:00:11.643 ****** 2025-10-09 09:59:49.230323 | orchestrator | skipping: [testbed-node-3] 2025-10-09 
09:59:49.230340 | orchestrator | skipping: [testbed-node-4] 2025-10-09 09:59:49.230358 | orchestrator | skipping: [testbed-node-5] 2025-10-09 09:59:49.230369 | orchestrator | skipping: [testbed-node-0] 2025-10-09 09:59:49.230379 | orchestrator | skipping: [testbed-node-1] 2025-10-09 09:59:49.230390 | orchestrator | skipping: [testbed-node-2] 2025-10-09 09:59:49.230401 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:49.230412 | orchestrator | 2025-10-09 09:59:49.230423 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-10-09 09:59:49.230435 | orchestrator | Thursday 09 October 2025 09:59:49 +0000 (0:00:00.463) 0:00:12.107 ****** 2025-10-09 09:59:49.230446 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:59:49.230456 | orchestrator | skipping: [testbed-node-3] 2025-10-09 09:59:49.230478 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:03.126398 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:03.126513 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:03.126528 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:03.126540 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:03.126551 | orchestrator | 2025-10-09 10:00:03.126564 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-10-09 10:00:03.126625 | orchestrator | Thursday 09 October 2025 09:59:49 +0000 (0:00:00.252) 0:00:12.359 ****** 2025-10-09 10:00:03.126637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:00:03.126666 | orchestrator | 2025-10-09 10:00:03.126678 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-10-09 10:00:03.126690 | orchestrator | Thursday 09 October 2025 09:59:49 +0000 (0:00:00.352) 0:00:12.711 ****** 2025-10-09 10:00:03.126701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:00:03.126713 | orchestrator | 2025-10-09 10:00:03.126724 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-10-09 10:00:03.126735 | orchestrator | Thursday 09 October 2025 09:59:50 +0000 (0:00:00.372) 0:00:13.084 ****** 2025-10-09 10:00:03.126746 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.126758 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:03.126768 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:03.126779 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.126790 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:03.126801 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.126812 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.126823 | orchestrator | 2025-10-09 10:00:03.126834 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-10-09 10:00:03.126845 | orchestrator | Thursday 09 October 2025 09:59:51 +0000 (0:00:01.555) 0:00:14.640 ****** 2025-10-09 10:00:03.126856 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:00:03.126867 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:03.126878 | 
orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:03.126889 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:03.126899 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:03.126910 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:03.126922 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:03.126934 | orchestrator | 2025-10-09 10:00:03.126947 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-10-09 10:00:03.126960 | orchestrator | Thursday 09 October 2025 09:59:51 +0000 (0:00:00.258) 0:00:14.898 ****** 2025-10-09 10:00:03.126974 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.126987 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.127000 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.127012 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.127025 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:03.127062 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:03.127075 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:03.127089 | orchestrator | 2025-10-09 10:00:03.127101 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-10-09 10:00:03.127114 | orchestrator | Thursday 09 October 2025 09:59:52 +0000 (0:00:00.518) 0:00:15.416 ****** 2025-10-09 10:00:03.127127 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:00:03.127139 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:03.127152 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:03.127164 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:03.127177 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:03.127189 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:03.127202 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:03.127214 | orchestrator | 2025-10-09 10:00:03.127228 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-10-09 10:00:03.127242 | orchestrator | Thursday 09 October 2025 09:59:52 +0000 (0:00:00.303) 0:00:15.720 ****** 2025-10-09 10:00:03.127254 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:03.127268 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.127280 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:03.127291 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:03.127302 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:03.127312 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:03.127323 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:03.127334 | orchestrator | 2025-10-09 10:00:03.127345 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-10-09 10:00:03.127356 | orchestrator | Thursday 09 October 2025 09:59:53 +0000 (0:00:00.571) 0:00:16.292 ****** 2025-10-09 10:00:03.127367 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.127377 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:03.127389 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:03.127400 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:03.127411 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:03.127422 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:03.127433 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:03.127443 | orchestrator | 2025-10-09 10:00:03.127454 | orchestrator | TASK [osism.commons.resolvconf : Start/enable 
systemd-resolved service] ******** 2025-10-09 10:00:03.127474 | orchestrator | Thursday 09 October 2025 09:59:54 +0000 (0:00:01.103) 0:00:17.395 ****** 2025-10-09 10:00:03.127486 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.127497 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:03.127508 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.127518 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.127529 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:03.127540 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.127551 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:03.127562 | orchestrator | 2025-10-09 10:00:03.127591 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-10-09 10:00:03.127603 | orchestrator | Thursday 09 October 2025 09:59:56 +0000 (0:00:02.177) 0:00:19.573 ****** 2025-10-09 10:00:03.127630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:00:03.127642 | orchestrator | 2025-10-09 10:00:03.127654 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-10-09 10:00:03.127665 | orchestrator | Thursday 09 October 2025 09:59:57 +0000 (0:00:00.470) 0:00:20.043 ****** 2025-10-09 10:00:03.127676 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:00:03.127687 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:03.127698 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:03.127709 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:03.127719 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:03.127739 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:03.127750 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:03.127761 | orchestrator | 2025-10-09 10:00:03.127772 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-10-09 10:00:03.127783 | orchestrator | Thursday 09 October 2025 09:59:58 +0000 (0:00:01.372) 0:00:21.416 ****** 2025-10-09 10:00:03.127794 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.127805 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.127816 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.127827 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.127838 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:03.127849 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:03.127859 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:03.127870 | orchestrator | 2025-10-09 10:00:03.127882 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-10-09 10:00:03.127893 | orchestrator | Thursday 09 October 2025 09:59:58 +0000 (0:00:00.264) 0:00:21.680 ****** 2025-10-09 10:00:03.127904 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.127915 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.127925 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.127936 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.127947 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:03.127957 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:03.127968 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:03.127979 | orchestrator | 2025-10-09 10:00:03.127990 | orchestrator | TASK 
[osism.commons.repository : Set repositories to default] ****************** 2025-10-09 10:00:03.128001 | orchestrator | Thursday 09 October 2025 09:59:58 +0000 (0:00:00.262) 0:00:21.942 ****** 2025-10-09 10:00:03.128012 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.128023 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.128034 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.128045 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.128056 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:03.128066 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:03.128077 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:03.128088 | orchestrator | 2025-10-09 10:00:03.128099 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-10-09 10:00:03.128110 | orchestrator | Thursday 09 October 2025 09:59:59 +0000 (0:00:00.240) 0:00:22.183 ****** 2025-10-09 10:00:03.128122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:00:03.128135 | orchestrator | 2025-10-09 10:00:03.128146 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-10-09 10:00:03.128157 | orchestrator | Thursday 09 October 2025 09:59:59 +0000 (0:00:00.294) 0:00:22.478 ****** 2025-10-09 10:00:03.128168 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.128179 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.128190 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.128201 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:03.128212 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.128222 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:03.128233 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:03.128244 | orchestrator | 2025-10-09 10:00:03.128255 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-10-09 10:00:03.128266 | orchestrator | Thursday 09 October 2025 09:59:59 +0000 (0:00:00.528) 0:00:23.006 ****** 2025-10-09 10:00:03.128277 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:00:03.128288 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:03.128299 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:03.128310 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:03.128321 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:03.128331 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:03.128342 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:03.128360 | orchestrator | 2025-10-09 10:00:03.128371 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-10-09 10:00:03.128382 | orchestrator | Thursday 09 October 2025 10:00:00 +0000 (0:00:00.276) 0:00:23.282 ****** 2025-10-09 10:00:03.128393 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.128404 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.128415 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.128426 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.128437 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:03.128448 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:03.128459 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:03.128470 | orchestrator | 2025-10-09 
10:00:03.128481 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-10-09 10:00:03.128492 | orchestrator | Thursday 09 October 2025 10:00:01 +0000 (0:00:01.171) 0:00:24.453 ****** 2025-10-09 10:00:03.128503 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.128514 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.128525 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.128536 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.128547 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:03.128558 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:03.128585 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:03.128596 | orchestrator | 2025-10-09 10:00:03.128607 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-10-09 10:00:03.128618 | orchestrator | Thursday 09 October 2025 10:00:01 +0000 (0:00:00.563) 0:00:25.017 ****** 2025-10-09 10:00:03.128629 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:03.128640 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:03.128651 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:03.128662 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:03.128679 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:45.977027 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:45.977150 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:45.977167 | orchestrator | 2025-10-09 10:00:45.977181 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-10-09 10:00:45.977215 | orchestrator | Thursday 09 October 2025 10:00:03 +0000 (0:00:01.129) 0:00:26.147 ****** 2025-10-09 10:00:45.977227 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.977239 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.977251 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.977262 | orchestrator | changed: [testbed-manager] 2025-10-09 10:00:45.977273 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:45.977284 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:45.977295 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:45.977306 | orchestrator | 2025-10-09 10:00:45.977317 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-10-09 10:00:45.977328 | orchestrator | Thursday 09 October 2025 10:00:20 +0000 (0:00:17.113) 0:00:43.261 ****** 2025-10-09 10:00:45.977339 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.977350 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.977361 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.977372 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.977382 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.977393 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.977404 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.977415 | orchestrator | 2025-10-09 10:00:45.977426 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-10-09 10:00:45.977437 | orchestrator | Thursday 09 October 2025 10:00:20 +0000 (0:00:00.255) 0:00:43.516 ****** 2025-10-09 10:00:45.977448 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.977459 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.977470 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.977480 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.977491 | orchestrator | ok: 
[testbed-node-0] 2025-10-09 10:00:45.977501 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.977512 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.977575 | orchestrator | 2025-10-09 10:00:45.977590 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-10-09 10:00:45.977604 | orchestrator | Thursday 09 October 2025 10:00:20 +0000 (0:00:00.233) 0:00:43.750 ****** 2025-10-09 10:00:45.977616 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.977629 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.977642 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.977654 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.977666 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.977679 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.977691 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.977703 | orchestrator | 2025-10-09 10:00:45.977715 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-10-09 10:00:45.977728 | orchestrator | Thursday 09 October 2025 10:00:20 +0000 (0:00:00.230) 0:00:43.980 ****** 2025-10-09 10:00:45.977742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:00:45.977756 | orchestrator | 2025-10-09 10:00:45.977770 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-10-09 10:00:45.977782 | orchestrator | Thursday 09 October 2025 10:00:21 +0000 (0:00:00.288) 0:00:44.269 ****** 2025-10-09 10:00:45.977795 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.977807 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.977820 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.977832 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.977845 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.977857 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.977869 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.977881 | orchestrator | 2025-10-09 10:00:45.977894 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-10-09 10:00:45.977907 | orchestrator | Thursday 09 October 2025 10:00:22 +0000 (0:00:01.466) 0:00:45.736 ****** 2025-10-09 10:00:45.977919 | orchestrator | changed: [testbed-manager] 2025-10-09 10:00:45.977930 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:45.977941 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:45.977952 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:45.977963 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:45.977973 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:45.977984 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:45.977995 | orchestrator | 2025-10-09 10:00:45.978006 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-10-09 10:00:45.978066 | orchestrator | Thursday 09 October 2025 10:00:23 +0000 (0:00:01.171) 0:00:46.907 ****** 2025-10-09 10:00:45.978079 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.978089 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.978100 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.978111 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.978122 | 
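
The osism.services.rsyslog tasks above follow the usual install/configure/enable pattern before forwarding syslog to the local fluentd daemon. A minimal hand-written equivalent, assuming a rsyslog.conf.j2 template and a restart handler (both assumptions, not the actual role content), would look roughly like this:

- name: Install rsyslog package
  ansible.builtin.apt:
    name: rsyslog
    state: present

- name: Copy rsyslog.conf configuration file
  ansible.builtin.template:
    src: rsyslog.conf.j2                     # template name is an assumption
    dest: /etc/rsyslog.conf
    mode: "0644"
  notify: Restart rsyslog                    # handler name is an assumption

- name: Manage rsyslog service
  ansible.builtin.service:
    name: rsyslog
    state: started
    enabled: true
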
orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.978132 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.978143 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.978154 | orchestrator | 2025-10-09 10:00:45.978165 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-10-09 10:00:45.978175 | orchestrator | Thursday 09 October 2025 10:00:24 +0000 (0:00:00.818) 0:00:47.726 ****** 2025-10-09 10:00:45.978194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:00:45.978207 | orchestrator | 2025-10-09 10:00:45.978218 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-10-09 10:00:45.978229 | orchestrator | Thursday 09 October 2025 10:00:25 +0000 (0:00:00.364) 0:00:48.090 ****** 2025-10-09 10:00:45.978240 | orchestrator | changed: [testbed-manager] 2025-10-09 10:00:45.978259 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:45.978270 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:45.978280 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:45.978291 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:45.978302 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:45.978313 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:45.978323 | orchestrator | 2025-10-09 10:00:45.978353 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-10-09 10:00:45.978365 | orchestrator | Thursday 09 October 2025 10:00:26 +0000 (0:00:01.118) 0:00:49.208 ****** 2025-10-09 10:00:45.978376 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:00:45.978387 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:45.978398 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:45.978408 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:45.978419 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:45.978430 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:45.978441 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:45.978452 | orchestrator | 2025-10-09 10:00:45.978463 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-10-09 10:00:45.978474 | orchestrator | Thursday 09 October 2025 10:00:26 +0000 (0:00:00.342) 0:00:49.551 ****** 2025-10-09 10:00:45.978484 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:45.978495 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:45.978506 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:45.978517 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:45.978528 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:45.978538 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:45.978566 | orchestrator | changed: [testbed-manager] 2025-10-09 10:00:45.978577 | orchestrator | 2025-10-09 10:00:45.978589 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-10-09 10:00:45.978600 | orchestrator | Thursday 09 October 2025 10:00:40 +0000 (0:00:13.567) 0:01:03.118 ****** 2025-10-09 10:00:45.978611 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.978621 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.978632 | orchestrator | ok: [testbed-node-1] 2025-10-09 
10:00:45.978643 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.978654 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.978664 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.978675 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.978686 | orchestrator | 2025-10-09 10:00:45.978697 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-10-09 10:00:45.978708 | orchestrator | Thursday 09 October 2025 10:00:41 +0000 (0:00:01.579) 0:01:04.698 ****** 2025-10-09 10:00:45.978719 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.978730 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.978740 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.978751 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.978762 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.978772 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.978783 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.978794 | orchestrator | 2025-10-09 10:00:45.978805 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-10-09 10:00:45.978816 | orchestrator | Thursday 09 October 2025 10:00:42 +0000 (0:00:00.919) 0:01:05.617 ****** 2025-10-09 10:00:45.978827 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.978837 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.978848 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.978859 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.978870 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.978881 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.978891 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.978902 | orchestrator | 2025-10-09 10:00:45.978913 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-10-09 10:00:45.978924 | orchestrator | Thursday 09 October 2025 10:00:42 +0000 (0:00:00.285) 0:01:05.902 ****** 2025-10-09 10:00:45.978943 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.978954 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.978965 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.978975 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.978986 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.978997 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.979007 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.979018 | orchestrator | 2025-10-09 10:00:45.979029 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-10-09 10:00:45.979040 | orchestrator | Thursday 09 October 2025 10:00:43 +0000 (0:00:00.259) 0:01:06.162 ****** 2025-10-09 10:00:45.979051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:00:45.979062 | orchestrator | 2025-10-09 10:00:45.979073 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-10-09 10:00:45.979084 | orchestrator | Thursday 09 October 2025 10:00:43 +0000 (0:00:00.304) 0:01:06.467 ****** 2025-10-09 10:00:45.979095 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.979106 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.979117 | orchestrator | ok: [testbed-node-4] 2025-10-09 
10:00:45.979127 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.979138 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.979149 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.979159 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.979170 | orchestrator | 2025-10-09 10:00:45.979181 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-10-09 10:00:45.979192 | orchestrator | Thursday 09 October 2025 10:00:45 +0000 (0:00:01.692) 0:01:08.159 ****** 2025-10-09 10:00:45.979203 | orchestrator | changed: [testbed-manager] 2025-10-09 10:00:45.979214 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:45.979225 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:45.979236 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:45.979246 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:45.979262 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:45.979274 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:45.979284 | orchestrator | 2025-10-09 10:00:45.979295 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-10-09 10:00:45.979306 | orchestrator | Thursday 09 October 2025 10:00:45 +0000 (0:00:00.580) 0:01:08.739 ****** 2025-10-09 10:00:45.979317 | orchestrator | ok: [testbed-manager] 2025-10-09 10:00:45.979328 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:45.979339 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:45.979350 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:45.979361 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:45.979371 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:45.979382 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:45.979393 | orchestrator | 2025-10-09 10:00:45.979410 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-10-09 10:03:05.252739 | orchestrator | Thursday 09 October 2025 10:00:45 +0000 (0:00:00.257) 0:01:08.996 ****** 2025-10-09 10:03:05.252861 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:05.252878 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:05.252890 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:05.252901 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:05.252912 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:05.252923 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:05.252934 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:05.252945 | orchestrator | 2025-10-09 10:03:05.252957 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-10-09 10:03:05.252969 | orchestrator | Thursday 09 October 2025 10:00:47 +0000 (0:00:01.186) 0:01:10.183 ****** 2025-10-09 10:03:05.252980 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:03:05.253016 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:03:05.253028 | orchestrator | changed: [testbed-manager] 2025-10-09 10:03:05.253039 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:03:05.253049 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:03:05.253060 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:03:05.253070 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:03:05.253081 | orchestrator | 2025-10-09 10:03:05.253092 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-10-09 10:03:05.253102 | orchestrator | Thursday 09 October 2025 10:00:48 +0000 
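
The osism.commons.packages steps above set the needrestart mode and refresh the apt cache before downloading and applying upgrades. A minimal sketch of those two steps, assuming needrestart's standard configuration file and a hypothetical apt_cache_valid_time variable (the actual role may differ):

- name: Set needrestart mode
  ansible.builtin.lineinfile:
    path: /etc/needrestart/needrestart.conf
    regexp: '^#?\$nrconf\{restart\}'
    line: "$nrconf{restart} = 'a';"          # 'a' restarts services automatically; chosen value is an assumption
- name: Update package cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ apt_cache_valid_time | default(3600) }}"
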
(0:00:01.803) 0:01:11.986 ****** 2025-10-09 10:03:05.253113 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:05.253124 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:05.253134 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:05.253145 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:05.253155 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:05.253166 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:05.253177 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:05.253187 | orchestrator | 2025-10-09 10:03:05.253198 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-10-09 10:03:05.253209 | orchestrator | Thursday 09 October 2025 10:00:51 +0000 (0:00:02.468) 0:01:14.455 ****** 2025-10-09 10:03:05.253219 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:05.253230 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:05.253241 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:05.253251 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:05.253262 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:05.253275 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:05.253288 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:05.253301 | orchestrator | 2025-10-09 10:03:05.253314 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-10-09 10:03:05.253327 | orchestrator | Thursday 09 October 2025 10:01:30 +0000 (0:00:38.677) 0:01:53.133 ****** 2025-10-09 10:03:05.253340 | orchestrator | changed: [testbed-manager] 2025-10-09 10:03:05.253353 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:03:05.253365 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:03:05.253377 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:03:05.253389 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:03:05.253402 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:03:05.253414 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:03:05.253426 | orchestrator | 2025-10-09 10:03:05.253439 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-10-09 10:03:05.253451 | orchestrator | Thursday 09 October 2025 10:02:46 +0000 (0:01:16.643) 0:03:09.776 ****** 2025-10-09 10:03:05.253464 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:05.253476 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:05.253513 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:05.253525 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:05.253537 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:05.253549 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:05.253561 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:05.253573 | orchestrator | 2025-10-09 10:03:05.253590 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-10-09 10:03:05.253605 | orchestrator | Thursday 09 October 2025 10:02:48 +0000 (0:00:01.619) 0:03:11.396 ****** 2025-10-09 10:03:05.253617 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:05.253628 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:05.253639 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:05.253650 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:05.253660 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:05.253671 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:05.253681 | orchestrator | changed: [testbed-manager] 2025-10-09 10:03:05.253692 | orchestrator | 2025-10-09 
10:03:05.253703 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-10-09 10:03:05.253714 | orchestrator | Thursday 09 October 2025 10:03:02 +0000 (0:00:14.509) 0:03:25.905 ****** 2025-10-09 10:03:05.253740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-10-09 10:03:05.253773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-10-09 10:03:05.253811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-10-09 10:03:05.253826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-10-09 10:03:05.253838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-10-09 10:03:05.253859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-10-09 10:03:05.253871 | orchestrator | 2025-10-09 10:03:05.253883 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-10-09 10:03:05.253894 | orchestrator | Thursday 09 October 2025 10:03:03 +0000 (0:00:00.444) 0:03:26.350 ****** 2025-10-09 10:03:05.253905 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-10-09 10:03:05.253916 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:03:05.253927 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-10-09 10:03:05.253938 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'vm.max_map_count', 'value': 262144})  2025-10-09 10:03:05.253949 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:03:05.253960 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:03:05.253971 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-10-09 10:03:05.253982 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:03:05.253993 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:03:05.254004 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:03:05.254014 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:03:05.254076 | orchestrator | 2025-10-09 10:03:05.254088 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-10-09 10:03:05.254106 | orchestrator | Thursday 09 October 2025 10:03:05 +0000 (0:00:01.766) 0:03:28.116 ****** 2025-10-09 10:03:05.254117 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-10-09 10:03:05.254130 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-10-09 10:03:05.254141 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-10-09 10:03:05.254151 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-10-09 10:03:05.254162 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-10-09 10:03:05.254173 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-10-09 10:03:05.254184 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-10-09 10:03:05.254195 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-10-09 10:03:05.254206 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-10-09 10:03:05.254217 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-10-09 10:03:05.254227 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-10-09 10:03:05.254238 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-10-09 10:03:05.254254 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-10-09 10:03:05.254265 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:03:05.254276 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-10-09 10:03:05.254287 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-10-09 10:03:05.254298 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-10-09 10:03:05.254309 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-10-09 10:03:05.254327 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-10-09 10:03:12.770826 | orchestrator | 
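
The sysctl items listed above are applied per node group, which is why the manager and the non-control nodes are skipped for the rabbitmq set while testbed-node-0/1/2 receive the changes. A minimal sketch of such a grouped loop, assuming the ansible.posix.sysctl module and a hypothetical sysctl_rabbitmq_parameters variable (the actual osism.commons.sysctl layout may differ):

- name: Set sysctl parameters on rabbitmq
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true
    reload: true
  loop: "{{ sysctl_rabbitmq_parameters }}"   # variable name is an assumption
  when: "'rabbitmq' in group_names"          # group condition is an assumption
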
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-10-09 10:03:12.770938 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-10-09 10:03:12.770955 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-10-09 10:03:12.770967 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-10-09 10:03:12.770979 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:03:12.770991 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-10-09 10:03:12.771003 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-10-09 10:03:12.771013 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-10-09 10:03:12.771024 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-10-09 10:03:12.771035 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-10-09 10:03:12.771047 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-10-09 10:03:12.771058 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-10-09 10:03:12.771069 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-10-09 10:03:12.771104 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-10-09 10:03:12.771116 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-10-09 10:03:12.771127 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-10-09 10:03:12.771138 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-10-09 10:03:12.771148 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-10-09 10:03:12.771159 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-10-09 10:03:12.771170 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-10-09 10:03:12.771180 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-10-09 10:03:12.771191 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-10-09 10:03:12.771201 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-10-09 10:03:12.771212 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:03:12.771223 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:03:12.771234 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-10-09 10:03:12.771259 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-10-09 10:03:12.771270 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-10-09 10:03:12.771281 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-10-09 10:03:12.771292 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-10-09 10:03:12.771302 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-10-09 10:03:12.771313 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-10-09 10:03:12.771324 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-10-09 10:03:12.771335 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-10-09 10:03:12.771345 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-10-09 10:03:12.771356 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-10-09 10:03:12.771385 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-10-09 10:03:12.771397 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-10-09 10:03:12.771409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-10-09 10:03:12.771422 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-10-09 10:03:12.771433 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-10-09 10:03:12.771446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-10-09 10:03:12.771458 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-10-09 10:03:12.771516 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-10-09 10:03:12.771532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-10-09 10:03:12.771543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-10-09 10:03:12.771554 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-10-09 10:03:12.771573 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-10-09 10:03:12.771584 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-10-09 10:03:12.771595 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-10-09 10:03:12.771606 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-10-09 10:03:12.771617 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-10-09 10:03:12.771628 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-10-09 10:03:12.771638 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-10-09 10:03:12.771649 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-10-09 10:03:12.771660 | orchestrator | 2025-10-09 10:03:12.771672 | orchestrator | TASK [osism.commons.sysctl 
: Set sysctl parameters on generic] ***************** 2025-10-09 10:03:12.771683 | orchestrator | Thursday 09 October 2025 10:03:10 +0000 (0:00:04.919) 0:03:33.035 ****** 2025-10-09 10:03:12.771694 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:03:12.771705 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:03:12.771715 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:03:12.771726 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:03:12.771737 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:03:12.771748 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:03:12.771759 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:03:12.771770 | orchestrator | 2025-10-09 10:03:12.771781 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-10-09 10:03:12.771792 | orchestrator | Thursday 09 October 2025 10:03:11 +0000 (0:00:01.617) 0:03:34.653 ****** 2025-10-09 10:03:12.771802 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:03:12.771813 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:03:12.771824 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:03:12.771835 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:03:12.771846 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:03:12.771857 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:03:12.771867 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:03:12.771878 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:03:12.771889 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:03:12.771900 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:03:12.771911 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:03:12.771921 | orchestrator | 2025-10-09 10:03:12.771932 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-10-09 10:03:12.771943 | orchestrator | Thursday 09 October 2025 10:03:12 +0000 (0:00:00.499) 0:03:35.152 ****** 2025-10-09 10:03:12.771954 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:03:12.771965 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:03:12.771982 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:03:12.771993 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:03:12.772009 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:03:12.772021 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 
'value': 1048576})  2025-10-09 10:03:12.772031 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:03:12.772042 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:03:12.772053 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:03:12.772064 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:03:12.772075 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:03:12.772086 | orchestrator | 2025-10-09 10:03:12.772102 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-10-09 10:03:26.547540 | orchestrator | Thursday 09 October 2025 10:03:12 +0000 (0:00:00.643) 0:03:35.796 ****** 2025-10-09 10:03:26.547658 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-10-09 10:03:26.547675 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:03:26.547688 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-10-09 10:03:26.547699 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:03:26.547710 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-10-09 10:03:26.547721 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:03:26.547732 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-10-09 10:03:26.547743 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:03:26.547754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-10-09 10:03:26.547765 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-10-09 10:03:26.547776 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-10-09 10:03:26.547786 | orchestrator | 2025-10-09 10:03:26.547798 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-10-09 10:03:26.547809 | orchestrator | Thursday 09 October 2025 10:03:14 +0000 (0:00:01.576) 0:03:37.372 ****** 2025-10-09 10:03:26.547820 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:03:26.547831 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:03:26.547842 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:03:26.547852 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:03:26.547863 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:03:26.547873 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:03:26.547884 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:03:26.547895 | orchestrator | 2025-10-09 10:03:26.547905 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-10-09 10:03:26.547916 | orchestrator | Thursday 09 October 2025 10:03:14 +0000 (0:00:00.346) 0:03:37.718 ****** 2025-10-09 10:03:26.547927 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:26.547939 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:26.547950 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:26.547960 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:26.547971 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:26.547981 | 
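
The osism.commons.services tasks gather service facts here, and the tasks that follow check for optional services and ensure the required ones (in this run only cron) are running and enabled. A minimal sketch of that pattern, with the required_services list as an assumption:

- name: Populate service facts
  ansible.builtin.service_facts:

- name: Start/enable required services
  ansible.builtin.service:
    name: "{{ item }}"
    state: started
    enabled: true
  loop: "{{ required_services | default(['cron']) }}"   # list contents are an assumption
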
orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:26.547992 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:26.548002 | orchestrator | 2025-10-09 10:03:26.548013 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-10-09 10:03:26.548025 | orchestrator | Thursday 09 October 2025 10:03:20 +0000 (0:00:05.784) 0:03:43.502 ****** 2025-10-09 10:03:26.548062 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-10-09 10:03:26.548077 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:03:26.548089 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-10-09 10:03:26.548102 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:03:26.548113 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-10-09 10:03:26.548126 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:03:26.548138 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-10-09 10:03:26.548149 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-10-09 10:03:26.548162 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:03:26.548174 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:03:26.548186 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-10-09 10:03:26.548199 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:03:26.548211 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-10-09 10:03:26.548223 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:03:26.548236 | orchestrator | 2025-10-09 10:03:26.548248 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-10-09 10:03:26.548261 | orchestrator | Thursday 09 October 2025 10:03:20 +0000 (0:00:00.320) 0:03:43.823 ****** 2025-10-09 10:03:26.548273 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-10-09 10:03:26.548285 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-10-09 10:03:26.548297 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-10-09 10:03:26.548310 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-10-09 10:03:26.548322 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-10-09 10:03:26.548334 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-10-09 10:03:26.548347 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-10-09 10:03:26.548359 | orchestrator | 2025-10-09 10:03:26.548372 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-10-09 10:03:26.548384 | orchestrator | Thursday 09 October 2025 10:03:21 +0000 (0:00:01.052) 0:03:44.875 ****** 2025-10-09 10:03:26.548397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:03:26.548410 | orchestrator | 2025-10-09 10:03:26.548421 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-10-09 10:03:26.548432 | orchestrator | Thursday 09 October 2025 10:03:22 +0000 (0:00:00.563) 0:03:45.438 ****** 2025-10-09 10:03:26.548460 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:26.548491 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:26.548503 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:26.548514 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:26.548525 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:26.548536 | 
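
The osism.commons.motd tasks in this span remove the update-motd package and, in the log lines that follow, disable Ubuntu's dynamic motd-news and strip pam_motd.so rules from the files found under /etc/pam.d. A minimal sketch of those last two steps, using the standard Ubuntu paths; the pam_configuration_files register name is an assumption, not the actual role's variable:

- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: ENABLED=0

- name: Remove pam_motd.so rule
  ansible.builtin.replace:
    path: "{{ item.path }}"
    regexp: '^session\s+optional\s+pam_motd\.so.*\n'
    replace: ""
  loop: "{{ pam_configuration_files.files }}"   # result of a prior find over /etc/pam.d; name is an assumption
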
orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:26.548546 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:26.548557 | orchestrator | 2025-10-09 10:03:26.548568 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-10-09 10:03:26.548579 | orchestrator | Thursday 09 October 2025 10:03:23 +0000 (0:00:01.223) 0:03:46.662 ****** 2025-10-09 10:03:26.548591 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:26.548618 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:26.548629 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:26.548640 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:26.548651 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:26.548661 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:26.548672 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:26.548684 | orchestrator | 2025-10-09 10:03:26.548694 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-10-09 10:03:26.548706 | orchestrator | Thursday 09 October 2025 10:03:24 +0000 (0:00:00.611) 0:03:47.273 ****** 2025-10-09 10:03:26.548717 | orchestrator | changed: [testbed-manager] 2025-10-09 10:03:26.548736 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:03:26.548747 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:03:26.548758 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:03:26.548769 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:03:26.548779 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:03:26.548790 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:03:26.548801 | orchestrator | 2025-10-09 10:03:26.548812 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-10-09 10:03:26.548822 | orchestrator | Thursday 09 October 2025 10:03:24 +0000 (0:00:00.726) 0:03:47.999 ****** 2025-10-09 10:03:26.548833 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:26.548844 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:26.548855 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:26.548866 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:26.548876 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:26.548887 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:26.548898 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:26.548909 | orchestrator | 2025-10-09 10:03:26.548919 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-10-09 10:03:26.548930 | orchestrator | Thursday 09 October 2025 10:03:25 +0000 (0:00:00.605) 0:03:48.605 ****** 2025-10-09 10:03:26.548945 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002776.9316268, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:26.548959 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 
'atime': 1760002801.9042163, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:26.548971 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002808.660363, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:26.548983 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002815.4372423, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:26.549000 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002816.4219635, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:26.549035 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002815.202476, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803072 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002796.538044, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803197 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 
'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803215 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803228 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803240 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803268 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803301 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803340 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 
1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:03:41.803354 | orchestrator | 2025-10-09 10:03:41.803368 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-10-09 10:03:41.803381 | orchestrator | Thursday 09 October 2025 10:03:26 +0000 (0:00:00.963) 0:03:49.568 ****** 2025-10-09 10:03:41.803392 | orchestrator | changed: [testbed-manager] 2025-10-09 10:03:41.803404 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:03:41.803415 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:03:41.803426 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:03:41.803437 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:03:41.803448 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:03:41.803458 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:03:41.803529 | orchestrator | 2025-10-09 10:03:41.803543 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-10-09 10:03:41.803554 | orchestrator | Thursday 09 October 2025 10:03:27 +0000 (0:00:01.135) 0:03:50.704 ****** 2025-10-09 10:03:41.803565 | orchestrator | changed: [testbed-manager] 2025-10-09 10:03:41.803576 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:03:41.803587 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:03:41.803598 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:03:41.803609 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:03:41.803619 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:03:41.803630 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:03:41.803641 | orchestrator | 2025-10-09 10:03:41.803652 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-10-09 10:03:41.803663 | orchestrator | Thursday 09 October 2025 10:03:28 +0000 (0:00:01.231) 0:03:51.935 ****** 2025-10-09 10:03:41.803674 | orchestrator | changed: [testbed-manager] 2025-10-09 10:03:41.803685 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:03:41.803696 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:03:41.803706 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:03:41.803717 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:03:41.803728 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:03:41.803739 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:03:41.803749 | orchestrator | 2025-10-09 10:03:41.803760 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-10-09 10:03:41.803772 | orchestrator | Thursday 09 October 2025 10:03:30 +0000 (0:00:01.128) 0:03:53.064 ****** 2025-10-09 10:03:41.803783 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:03:41.803793 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:03:41.803804 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:03:41.803815 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:03:41.803826 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:03:41.803846 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:03:41.803857 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:03:41.803868 | orchestrator | 2025-10-09 10:03:41.803879 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] 
**************** 2025-10-09 10:03:41.803890 | orchestrator | Thursday 09 October 2025 10:03:30 +0000 (0:00:00.349) 0:03:53.413 ****** 2025-10-09 10:03:41.803901 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:41.803913 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:41.803923 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:41.803934 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:41.803945 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:41.803956 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:41.803967 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:41.803978 | orchestrator | 2025-10-09 10:03:41.803989 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-10-09 10:03:41.804000 | orchestrator | Thursday 09 October 2025 10:03:31 +0000 (0:00:00.787) 0:03:54.201 ****** 2025-10-09 10:03:41.804012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:03:41.804025 | orchestrator | 2025-10-09 10:03:41.804037 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-10-09 10:03:41.804048 | orchestrator | Thursday 09 October 2025 10:03:31 +0000 (0:00:00.436) 0:03:54.638 ****** 2025-10-09 10:03:41.804065 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:41.804076 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:03:41.804087 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:03:41.804097 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:03:41.804108 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:03:41.804119 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:03:41.804130 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:03:41.804141 | orchestrator | 2025-10-09 10:03:41.804152 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-10-09 10:03:41.804162 | orchestrator | Thursday 09 October 2025 10:03:39 +0000 (0:00:07.848) 0:04:02.487 ****** 2025-10-09 10:03:41.804174 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:41.804184 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:41.804195 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:41.804206 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:41.804217 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:41.804228 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:41.804238 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:41.804249 | orchestrator | 2025-10-09 10:03:41.804260 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-10-09 10:03:41.804271 | orchestrator | Thursday 09 October 2025 10:03:40 +0000 (0:00:01.291) 0:04:03.778 ****** 2025-10-09 10:03:41.804282 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:03:41.804293 | orchestrator | ok: [testbed-manager] 2025-10-09 10:03:41.804304 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:03:41.804314 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:03:41.804325 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:03:41.804336 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:03:41.804346 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:03:41.804357 | orchestrator | 2025-10-09 10:03:41.804376 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each 
operating system] ****** 2025-10-09 10:04:51.643124 | orchestrator | Thursday 09 October 2025 10:03:41 +0000 (0:00:01.036) 0:04:04.817 ****** 2025-10-09 10:04:51.643247 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:51.643264 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:51.643276 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:51.643287 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:51.643297 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:51.643308 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:51.643319 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:51.643330 | orchestrator | 2025-10-09 10:04:51.643368 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-10-09 10:04:51.643381 | orchestrator | Thursday 09 October 2025 10:03:42 +0000 (0:00:00.343) 0:04:05.161 ****** 2025-10-09 10:04:51.643392 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:51.643402 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:51.643413 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:51.643424 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:51.643435 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:51.643474 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:51.643485 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:51.643496 | orchestrator | 2025-10-09 10:04:51.643507 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-10-09 10:04:51.643518 | orchestrator | Thursday 09 October 2025 10:03:42 +0000 (0:00:00.453) 0:04:05.615 ****** 2025-10-09 10:04:51.643529 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:51.643540 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:51.643550 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:51.643561 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:51.643571 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:51.643582 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:51.643592 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:51.643602 | orchestrator | 2025-10-09 10:04:51.643613 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-10-09 10:04:51.643624 | orchestrator | Thursday 09 October 2025 10:03:42 +0000 (0:00:00.333) 0:04:05.948 ****** 2025-10-09 10:04:51.643635 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:51.643646 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:51.643657 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:51.643670 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:51.643681 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:51.643693 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:51.643705 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:51.643717 | orchestrator | 2025-10-09 10:04:51.643729 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-10-09 10:04:51.643741 | orchestrator | Thursday 09 October 2025 10:03:48 +0000 (0:00:05.660) 0:04:11.609 ****** 2025-10-09 10:04:51.643755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:04:51.643770 | orchestrator | 2025-10-09 10:04:51.643783 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily 
timers] ************************ 2025-10-09 10:04:51.643796 | orchestrator | Thursday 09 October 2025 10:03:48 +0000 (0:00:00.416) 0:04:12.026 ****** 2025-10-09 10:04:51.643809 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-10-09 10:04:51.643822 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-10-09 10:04:51.643835 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-10-09 10:04:51.643847 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-10-09 10:04:51.643860 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:04:51.643873 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-10-09 10:04:51.643885 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:04:51.643897 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-10-09 10:04:51.643909 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-10-09 10:04:51.643922 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-10-09 10:04:51.643934 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:04:51.643946 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-10-09 10:04:51.643958 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-10-09 10:04:51.643970 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:04:51.643983 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-10-09 10:04:51.644004 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-10-09 10:04:51.644017 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:04:51.644028 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:04:51.644038 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-10-09 10:04:51.644049 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-10-09 10:04:51.644060 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:04:51.644070 | orchestrator | 2025-10-09 10:04:51.644082 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-10-09 10:04:51.644092 | orchestrator | Thursday 09 October 2025 10:03:49 +0000 (0:00:00.369) 0:04:12.395 ****** 2025-10-09 10:04:51.644123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:04:51.644135 | orchestrator | 2025-10-09 10:04:51.644146 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-10-09 10:04:51.644156 | orchestrator | Thursday 09 October 2025 10:03:49 +0000 (0:00:00.464) 0:04:12.859 ****** 2025-10-09 10:04:51.644167 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-10-09 10:04:51.644178 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-10-09 10:04:51.644189 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:04:51.644200 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-10-09 10:04:51.644211 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:04:51.644239 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:04:51.644251 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-10-09 10:04:51.644262 | orchestrator | skipping: [testbed-node-0] => 
(item=ModemManager.service)  2025-10-09 10:04:51.644273 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:04:51.644284 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:04:51.644295 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-10-09 10:04:51.644306 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:04:51.644317 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-10-09 10:04:51.644328 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:04:51.644339 | orchestrator | 2025-10-09 10:04:51.644350 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-10-09 10:04:51.644362 | orchestrator | Thursday 09 October 2025 10:03:50 +0000 (0:00:00.340) 0:04:13.199 ****** 2025-10-09 10:04:51.644373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:04:51.644385 | orchestrator | 2025-10-09 10:04:51.644396 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-10-09 10:04:51.644407 | orchestrator | Thursday 09 October 2025 10:03:50 +0000 (0:00:00.446) 0:04:13.646 ****** 2025-10-09 10:04:51.644418 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:04:51.644429 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:04:51.644468 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:04:51.644480 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:04:51.644491 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:04:51.644502 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:04:51.644512 | orchestrator | changed: [testbed-manager] 2025-10-09 10:04:51.644523 | orchestrator | 2025-10-09 10:04:51.644534 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-10-09 10:04:51.644545 | orchestrator | Thursday 09 October 2025 10:04:25 +0000 (0:00:34.650) 0:04:48.297 ****** 2025-10-09 10:04:51.644556 | orchestrator | changed: [testbed-manager] 2025-10-09 10:04:51.644566 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:04:51.644577 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:04:51.644595 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:04:51.644606 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:04:51.644616 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:04:51.644627 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:04:51.644638 | orchestrator | 2025-10-09 10:04:51.644648 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-10-09 10:04:51.644659 | orchestrator | Thursday 09 October 2025 10:04:33 +0000 (0:00:07.958) 0:04:56.255 ****** 2025-10-09 10:04:51.644670 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:04:51.644681 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:04:51.644691 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:04:51.644702 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:04:51.644713 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:04:51.644723 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:04:51.644734 | orchestrator | changed: [testbed-manager] 2025-10-09 10:04:51.644744 | orchestrator | 2025-10-09 10:04:51.644755 | orchestrator | TASK [osism.commons.cleanup : Remove useless 
packages from the cache] ********** 2025-10-09 10:04:51.644766 | orchestrator | Thursday 09 October 2025 10:04:40 +0000 (0:00:07.467) 0:05:03.722 ****** 2025-10-09 10:04:51.644777 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:51.644788 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:51.644798 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:51.644809 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:51.644820 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:51.644831 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:51.644841 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:51.644852 | orchestrator | 2025-10-09 10:04:51.644863 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-10-09 10:04:51.644874 | orchestrator | Thursday 09 October 2025 10:04:42 +0000 (0:00:01.792) 0:05:05.515 ****** 2025-10-09 10:04:51.644885 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:04:51.644896 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:04:51.644906 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:04:51.644917 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:04:51.644927 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:04:51.644938 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:04:51.644949 | orchestrator | changed: [testbed-manager] 2025-10-09 10:04:51.644959 | orchestrator | 2025-10-09 10:04:51.644976 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-10-09 10:04:51.644987 | orchestrator | Thursday 09 October 2025 10:04:48 +0000 (0:00:06.165) 0:05:11.681 ****** 2025-10-09 10:04:51.644999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:04:51.645012 | orchestrator | 2025-10-09 10:04:51.645023 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-10-09 10:04:51.645034 | orchestrator | Thursday 09 October 2025 10:04:49 +0000 (0:00:00.588) 0:05:12.269 ****** 2025-10-09 10:04:51.645044 | orchestrator | changed: [testbed-manager] 2025-10-09 10:04:51.645055 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:04:51.645066 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:04:51.645076 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:04:51.645087 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:04:51.645098 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:04:51.645108 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:04:51.645119 | orchestrator | 2025-10-09 10:04:51.645130 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-10-09 10:04:51.645141 | orchestrator | Thursday 09 October 2025 10:04:49 +0000 (0:00:00.762) 0:05:13.032 ****** 2025-10-09 10:04:51.645152 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:51.645162 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:51.645173 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:51.645184 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:51.645208 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:07.138548 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:07.138609 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:07.138618 | orchestrator | 2025-10-09 10:05:07.138625 | orchestrator | TASK 
[osism.commons.timezone : Set timezone to UTC] **************************** 2025-10-09 10:05:07.138631 | orchestrator | Thursday 09 October 2025 10:04:51 +0000 (0:00:01.626) 0:05:14.658 ****** 2025-10-09 10:05:07.138637 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:05:07.138644 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:05:07.138650 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:05:07.138656 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:05:07.138662 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:05:07.138667 | orchestrator | changed: [testbed-manager] 2025-10-09 10:05:07.138673 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:05:07.138679 | orchestrator | 2025-10-09 10:05:07.138685 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-10-09 10:05:07.138691 | orchestrator | Thursday 09 October 2025 10:04:52 +0000 (0:00:00.787) 0:05:15.446 ****** 2025-10-09 10:05:07.138697 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:07.138703 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:07.138708 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:07.138714 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:07.138720 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:07.138725 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:07.138731 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:07.138737 | orchestrator | 2025-10-09 10:05:07.138743 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-10-09 10:05:07.138748 | orchestrator | Thursday 09 October 2025 10:04:52 +0000 (0:00:00.301) 0:05:15.747 ****** 2025-10-09 10:05:07.138754 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:07.138760 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:07.138766 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:07.138771 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:07.138777 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:07.138783 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:07.138788 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:07.138794 | orchestrator | 2025-10-09 10:05:07.138800 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-10-09 10:05:07.138806 | orchestrator | Thursday 09 October 2025 10:04:53 +0000 (0:00:00.441) 0:05:16.188 ****** 2025-10-09 10:05:07.138811 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:07.138816 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:07.138822 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:07.138827 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:07.138833 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:07.138839 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:07.138844 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:07.138850 | orchestrator | 2025-10-09 10:05:07.138856 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-10-09 10:05:07.138862 | orchestrator | Thursday 09 October 2025 10:04:53 +0000 (0:00:00.347) 0:05:16.536 ****** 2025-10-09 10:05:07.138868 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:07.138873 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:07.138879 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:07.138885 | orchestrator | skipping: 
[testbed-node-5] 2025-10-09 10:05:07.138891 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:07.138897 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:07.138903 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:07.138909 | orchestrator | 2025-10-09 10:05:07.138915 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-10-09 10:05:07.138921 | orchestrator | Thursday 09 October 2025 10:04:53 +0000 (0:00:00.310) 0:05:16.847 ****** 2025-10-09 10:05:07.138926 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:07.138932 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:07.138948 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:07.138954 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:07.138959 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:07.138965 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:07.138971 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:07.138977 | orchestrator | 2025-10-09 10:05:07.138983 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-10-09 10:05:07.138988 | orchestrator | Thursday 09 October 2025 10:04:54 +0000 (0:00:00.337) 0:05:17.185 ****** 2025-10-09 10:05:07.138994 | orchestrator | ok: [testbed-manager] =>  2025-10-09 10:05:07.139000 | orchestrator |  docker_version: 5:27.5.1 2025-10-09 10:05:07.139006 | orchestrator | ok: [testbed-node-3] =>  2025-10-09 10:05:07.139012 | orchestrator |  docker_version: 5:27.5.1 2025-10-09 10:05:07.139017 | orchestrator | ok: [testbed-node-4] =>  2025-10-09 10:05:07.139023 | orchestrator |  docker_version: 5:27.5.1 2025-10-09 10:05:07.139028 | orchestrator | ok: [testbed-node-5] =>  2025-10-09 10:05:07.139038 | orchestrator |  docker_version: 5:27.5.1 2025-10-09 10:05:07.139044 | orchestrator | ok: [testbed-node-0] =>  2025-10-09 10:05:07.139049 | orchestrator |  docker_version: 5:27.5.1 2025-10-09 10:05:07.139055 | orchestrator | ok: [testbed-node-1] =>  2025-10-09 10:05:07.139060 | orchestrator |  docker_version: 5:27.5.1 2025-10-09 10:05:07.139066 | orchestrator | ok: [testbed-node-2] =>  2025-10-09 10:05:07.139072 | orchestrator |  docker_version: 5:27.5.1 2025-10-09 10:05:07.139077 | orchestrator | 2025-10-09 10:05:07.139083 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-10-09 10:05:07.139089 | orchestrator | Thursday 09 October 2025 10:04:54 +0000 (0:00:00.325) 0:05:17.510 ****** 2025-10-09 10:05:07.139095 | orchestrator | ok: [testbed-manager] =>  2025-10-09 10:05:07.139100 | orchestrator |  docker_cli_version: 5:27.5.1 2025-10-09 10:05:07.139106 | orchestrator | ok: [testbed-node-3] =>  2025-10-09 10:05:07.139112 | orchestrator |  docker_cli_version: 5:27.5.1 2025-10-09 10:05:07.139118 | orchestrator | ok: [testbed-node-4] =>  2025-10-09 10:05:07.139124 | orchestrator |  docker_cli_version: 5:27.5.1 2025-10-09 10:05:07.139129 | orchestrator | ok: [testbed-node-5] =>  2025-10-09 10:05:07.139135 | orchestrator |  docker_cli_version: 5:27.5.1 2025-10-09 10:05:07.139140 | orchestrator | ok: [testbed-node-0] =>  2025-10-09 10:05:07.139146 | orchestrator |  docker_cli_version: 5:27.5.1 2025-10-09 10:05:07.139152 | orchestrator | ok: [testbed-node-1] =>  2025-10-09 10:05:07.139159 | orchestrator |  docker_cli_version: 5:27.5.1 2025-10-09 10:05:07.139165 | orchestrator | ok: [testbed-node-2] =>  2025-10-09 10:05:07.139171 | orchestrator |  docker_cli_version: 5:27.5.1 2025-10-09 10:05:07.139177 | 
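The role prints the Docker version it pins on every node (5:27.5.1 above). A common way to express such a pin on Debian/Ubuntu is an apt preferences file; the sketch below shows that pattern under assumed file path, pin priority and module choice, with only the version string taken from the log, and is not the osism.services.docker implementation.

---
# Sketch only: pin docker-ce to the version printed in the log via apt
# preferences. Path, priority and wildcard handling are assumptions.
- hosts: all
  become: true
  vars:
    docker_version: "5:27.5.1"   # value printed by the role above
  tasks:
    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        mode: "0644"
        content: |
          Package: docker-ce
          Pin: version {{ docker_version }}*
          Pin-Priority: 1000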
orchestrator | 2025-10-09 10:05:07.139183 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-10-09 10:05:07.139199 | orchestrator | Thursday 09 October 2025 10:04:54 +0000 (0:00:00.345) 0:05:17.856 ****** 2025-10-09 10:05:07.139206 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:07.139213 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:07.139219 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:07.139225 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:07.139231 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:07.139237 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:07.139243 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:07.139249 | orchestrator | 2025-10-09 10:05:07.139255 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-10-09 10:05:07.139262 | orchestrator | Thursday 09 October 2025 10:04:55 +0000 (0:00:00.293) 0:05:18.150 ****** 2025-10-09 10:05:07.139268 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:07.139274 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:07.139280 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:07.139286 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:07.139292 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:07.139298 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:07.139304 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:07.139313 | orchestrator | 2025-10-09 10:05:07.139319 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-10-09 10:05:07.139325 | orchestrator | Thursday 09 October 2025 10:04:55 +0000 (0:00:00.352) 0:05:18.503 ****** 2025-10-09 10:05:07.139331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:05:07.139338 | orchestrator | 2025-10-09 10:05:07.139343 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-10-09 10:05:07.139349 | orchestrator | Thursday 09 October 2025 10:04:55 +0000 (0:00:00.466) 0:05:18.969 ****** 2025-10-09 10:05:07.139355 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:07.139362 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:07.139368 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:07.139374 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:07.139380 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:07.139386 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:07.139392 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:07.139398 | orchestrator | 2025-10-09 10:05:07.139404 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-10-09 10:05:07.139411 | orchestrator | Thursday 09 October 2025 10:04:56 +0000 (0:00:00.840) 0:05:19.809 ****** 2025-10-09 10:05:07.139416 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:07.139423 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:07.139429 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:07.139445 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:07.139451 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:07.139457 | orchestrator | ok: [testbed-node-5] 2025-10-09 
10:05:07.139462 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:07.139468 | orchestrator | 2025-10-09 10:05:07.139474 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-10-09 10:05:07.139481 | orchestrator | Thursday 09 October 2025 10:05:00 +0000 (0:00:03.416) 0:05:23.225 ****** 2025-10-09 10:05:07.139487 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-10-09 10:05:07.139494 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-10-09 10:05:07.139500 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-10-09 10:05:07.139505 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:07.139511 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-10-09 10:05:07.139517 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-10-09 10:05:07.139523 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-10-09 10:05:07.139528 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:07.139534 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-10-09 10:05:07.139540 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-10-09 10:05:07.139545 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-10-09 10:05:07.139551 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:07.139557 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-10-09 10:05:07.139563 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-10-09 10:05:07.139568 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-10-09 10:05:07.139574 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:07.139582 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-10-09 10:05:07.139589 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-10-09 10:05:07.139594 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-10-09 10:05:07.139600 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:07.139606 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-10-09 10:05:07.139612 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-10-09 10:05:07.139617 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-10-09 10:05:07.139628 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:07.139633 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-10-09 10:05:07.139639 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-10-09 10:05:07.139645 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-10-09 10:05:07.139650 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:07.139656 | orchestrator | 2025-10-09 10:05:07.139662 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-10-09 10:05:07.139668 | orchestrator | Thursday 09 October 2025 10:05:00 +0000 (0:00:00.670) 0:05:23.896 ****** 2025-10-09 10:05:07.139674 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:07.139679 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:05:07.139685 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:05:07.139691 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:05:07.139697 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:05:07.139703 | orchestrator | changed: [testbed-node-2] 2025-10-09 
10:05:07.139708 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:05:07.139714 | orchestrator | 2025-10-09 10:05:07.139724 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-10-09 10:06:00.874785 | orchestrator | Thursday 09 October 2025 10:05:07 +0000 (0:00:06.261) 0:05:30.157 ****** 2025-10-09 10:06:00.874909 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.874925 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.874938 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.874949 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:00.874961 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.874971 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.874982 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.874993 | orchestrator | 2025-10-09 10:06:00.875006 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-10-09 10:06:00.875017 | orchestrator | Thursday 09 October 2025 10:05:08 +0000 (0:00:01.252) 0:05:31.409 ****** 2025-10-09 10:06:00.875028 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:00.875039 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.875049 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.875060 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.875071 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.875082 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.875093 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.875104 | orchestrator | 2025-10-09 10:06:00.875115 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-10-09 10:06:00.875126 | orchestrator | Thursday 09 October 2025 10:05:16 +0000 (0:00:07.729) 0:05:39.139 ****** 2025-10-09 10:06:00.875137 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.875148 | orchestrator | changed: [testbed-manager] 2025-10-09 10:06:00.875159 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.875169 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.875180 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.875190 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.875201 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.875212 | orchestrator | 2025-10-09 10:06:00.875223 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-10-09 10:06:00.875234 | orchestrator | Thursday 09 October 2025 10:05:19 +0000 (0:00:03.375) 0:05:42.515 ****** 2025-10-09 10:06:00.875245 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:00.875256 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.875266 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.875277 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.875289 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.875299 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.875310 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.875324 | orchestrator | 2025-10-09 10:06:00.875337 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-10-09 10:06:00.875374 | orchestrator | Thursday 09 October 2025 10:05:20 +0000 (0:00:01.428) 0:05:43.943 ****** 2025-10-09 10:06:00.875387 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:00.875400 | orchestrator | 
changed: [testbed-node-3] 2025-10-09 10:06:00.875412 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.875452 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.875465 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.875478 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.875490 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.875502 | orchestrator | 2025-10-09 10:06:00.875515 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-10-09 10:06:00.875528 | orchestrator | Thursday 09 October 2025 10:05:22 +0000 (0:00:01.364) 0:05:45.308 ****** 2025-10-09 10:06:00.875540 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:00.875552 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:00.875565 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:00.875577 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:00.875590 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:00.875603 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:00.875615 | orchestrator | changed: [testbed-manager] 2025-10-09 10:06:00.875627 | orchestrator | 2025-10-09 10:06:00.875640 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-10-09 10:06:00.875653 | orchestrator | Thursday 09 October 2025 10:05:23 +0000 (0:00:00.828) 0:05:46.137 ****** 2025-10-09 10:06:00.875667 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:00.875678 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.875689 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.875699 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.875710 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.875721 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.875731 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.875742 | orchestrator | 2025-10-09 10:06:00.875753 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-10-09 10:06:00.875764 | orchestrator | Thursday 09 October 2025 10:05:32 +0000 (0:00:09.577) 0:05:55.714 ****** 2025-10-09 10:06:00.875775 | orchestrator | changed: [testbed-manager] 2025-10-09 10:06:00.875786 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.875797 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.875807 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.875818 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.875829 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.875840 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.875850 | orchestrator | 2025-10-09 10:06:00.875861 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-10-09 10:06:00.875872 | orchestrator | Thursday 09 October 2025 10:05:33 +0000 (0:00:00.930) 0:05:56.644 ****** 2025-10-09 10:06:00.875883 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:00.875894 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.875905 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.875915 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.875926 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.875936 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.875947 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.875958 | orchestrator | 2025-10-09 10:06:00.875969 | 
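The preceding tasks add the Docker apt repository (gpg key, repository, cache update) and install containerd and docker-cli before locking containerd. A hedged sketch of that flow follows; the upstream URL, the "noble" suite (inferred from the ubuntu-24.04 job name), keyring paths and package names are assumptions, not values read from the role.

---
# Sketch only: add an apt repository with a keyring, install containerd and
# the Docker CLI, then hold containerd, mirroring the task names in the log.
- hosts: all
  become: true
  tasks:
    - name: Ensure apt keyring directory exists
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: "0644"

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable"
        filename: docker
        state: present

    - name: Install containerd and docker-cli packages
      ansible.builtin.apt:
        name:
          - containerd.io
          - docker-ce-cli
        state: present
        update_cache: true

    # "Lock containerd package" in the log maps naturally to an apt hold.
    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold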
orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-10-09 10:06:00.875980 | orchestrator | Thursday 09 October 2025 10:05:42 +0000 (0:00:08.564) 0:06:05.209 ****** 2025-10-09 10:06:00.875990 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:00.876001 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.876012 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.876023 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.876034 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.876044 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.876080 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.876092 | orchestrator | 2025-10-09 10:06:00.876102 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-10-09 10:06:00.876113 | orchestrator | Thursday 09 October 2025 10:05:52 +0000 (0:00:10.656) 0:06:15.866 ****** 2025-10-09 10:06:00.876124 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-10-09 10:06:00.876136 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-10-09 10:06:00.876146 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-10-09 10:06:00.876157 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-10-09 10:06:00.876167 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-10-09 10:06:00.876178 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-10-09 10:06:00.876189 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-10-09 10:06:00.876199 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-10-09 10:06:00.876210 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-10-09 10:06:00.876221 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-10-09 10:06:00.876231 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-10-09 10:06:00.876242 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-10-09 10:06:00.876253 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-10-09 10:06:00.876263 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-10-09 10:06:00.876274 | orchestrator | 2025-10-09 10:06:00.876285 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-10-09 10:06:00.876296 | orchestrator | Thursday 09 October 2025 10:05:54 +0000 (0:00:01.254) 0:06:17.120 ****** 2025-10-09 10:06:00.876306 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:00.876317 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:00.876328 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:00.876339 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:00.876349 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:00.876360 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:00.876371 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:00.876381 | orchestrator | 2025-10-09 10:06:00.876392 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-10-09 10:06:00.876403 | orchestrator | Thursday 09 October 2025 10:05:54 +0000 (0:00:00.600) 0:06:17.720 ****** 2025-10-09 10:06:00.876414 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:00.876440 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:00.876451 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:00.876461 | 
orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:00.876472 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:00.876483 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:00.876493 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:00.876504 | orchestrator | 2025-10-09 10:06:00.876515 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-10-09 10:06:00.876577 | orchestrator | Thursday 09 October 2025 10:05:58 +0000 (0:00:04.223) 0:06:21.944 ****** 2025-10-09 10:06:00.876590 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:00.876601 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:00.876611 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:00.876622 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:00.876633 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:00.876643 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:00.876654 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:00.876664 | orchestrator | 2025-10-09 10:06:00.876676 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-10-09 10:06:00.876687 | orchestrator | Thursday 09 October 2025 10:05:59 +0000 (0:00:00.553) 0:06:22.497 ****** 2025-10-09 10:06:00.876698 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-10-09 10:06:00.876718 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-10-09 10:06:00.876729 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:00.876740 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-10-09 10:06:00.876751 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-10-09 10:06:00.876762 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:00.876773 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-10-09 10:06:00.876783 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-10-09 10:06:00.876799 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:00.876810 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-10-09 10:06:00.876821 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-10-09 10:06:00.876831 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:00.876842 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-10-09 10:06:00.876853 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-10-09 10:06:00.876863 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:00.876874 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-10-09 10:06:00.876885 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-10-09 10:06:00.876895 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:00.876906 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-10-09 10:06:00.876917 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-10-09 10:06:00.876928 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:00.876939 | orchestrator | 2025-10-09 10:06:00.876950 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-10-09 10:06:00.876961 | orchestrator | Thursday 09 October 2025 10:06:00 +0000 (0:00:00.804) 0:06:23.301 ****** 2025-10-09 10:06:00.876972 
| orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:00.876983 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:00.876993 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:00.877004 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:00.877015 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:00.877026 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:00.877037 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:00.877048 | orchestrator | 2025-10-09 10:06:00.877067 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-10-09 10:06:22.312298 | orchestrator | Thursday 09 October 2025 10:06:00 +0000 (0:00:00.593) 0:06:23.895 ****** 2025-10-09 10:06:22.312412 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:22.312478 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:22.312489 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:22.312499 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:22.312509 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:22.312519 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:22.312528 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:22.312538 | orchestrator | 2025-10-09 10:06:22.312550 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-10-09 10:06:22.312560 | orchestrator | Thursday 09 October 2025 10:06:01 +0000 (0:00:00.535) 0:06:24.430 ****** 2025-10-09 10:06:22.312570 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:22.312580 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:22.312589 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:22.312599 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:22.312609 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:22.312618 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:22.312628 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:22.312638 | orchestrator | 2025-10-09 10:06:22.312648 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-10-09 10:06:22.312657 | orchestrator | Thursday 09 October 2025 10:06:01 +0000 (0:00:00.561) 0:06:24.992 ****** 2025-10-09 10:06:22.312690 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.312702 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:22.312712 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:22.312722 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:22.312731 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:22.312741 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:22.312750 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:22.312760 | orchestrator | 2025-10-09 10:06:22.312769 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-10-09 10:06:22.312779 | orchestrator | Thursday 09 October 2025 10:06:03 +0000 (0:00:01.713) 0:06:26.705 ****** 2025-10-09 10:06:22.312790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:06:22.312802 | orchestrator | 2025-10-09 10:06:22.312812 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-10-09 10:06:22.312822 | 
orchestrator | Thursday 09 October 2025 10:06:04 +0000 (0:00:01.107) 0:06:27.812 ****** 2025-10-09 10:06:22.312832 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.312843 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:22.312855 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:22.312866 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:22.312877 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:22.312888 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:22.312899 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:22.312910 | orchestrator | 2025-10-09 10:06:22.312921 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-10-09 10:06:22.312932 | orchestrator | Thursday 09 October 2025 10:06:05 +0000 (0:00:00.852) 0:06:28.665 ****** 2025-10-09 10:06:22.312943 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.312954 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:22.312965 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:22.312975 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:22.312986 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:22.312996 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:22.313007 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:22.313018 | orchestrator | 2025-10-09 10:06:22.313029 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-10-09 10:06:22.313040 | orchestrator | Thursday 09 October 2025 10:06:06 +0000 (0:00:00.894) 0:06:29.559 ****** 2025-10-09 10:06:22.313051 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.313062 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:22.313073 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:22.313084 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:22.313095 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:22.313105 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:22.313116 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:22.313127 | orchestrator | 2025-10-09 10:06:22.313138 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-10-09 10:06:22.313150 | orchestrator | Thursday 09 October 2025 10:06:08 +0000 (0:00:01.578) 0:06:31.137 ****** 2025-10-09 10:06:22.313161 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:22.313172 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:22.313183 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:22.313194 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:22.313205 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:22.313214 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:22.313224 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:22.313233 | orchestrator | 2025-10-09 10:06:22.313243 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-10-09 10:06:22.313253 | orchestrator | Thursday 09 October 2025 10:06:09 +0000 (0:00:01.422) 0:06:32.559 ****** 2025-10-09 10:06:22.313263 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.313273 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:22.313295 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:22.313305 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:22.313315 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:22.313324 | orchestrator | 
changed: [testbed-node-2] 2025-10-09 10:06:22.313334 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:22.313343 | orchestrator | 2025-10-09 10:06:22.313353 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-10-09 10:06:22.313363 | orchestrator | Thursday 09 October 2025 10:06:10 +0000 (0:00:01.348) 0:06:33.908 ****** 2025-10-09 10:06:22.313373 | orchestrator | changed: [testbed-manager] 2025-10-09 10:06:22.313383 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:22.313392 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:22.313402 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:22.313411 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:22.313437 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:22.313446 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:22.313456 | orchestrator | 2025-10-09 10:06:22.313482 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-10-09 10:06:22.313492 | orchestrator | Thursday 09 October 2025 10:06:12 +0000 (0:00:01.467) 0:06:35.375 ****** 2025-10-09 10:06:22.313502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:06:22.313512 | orchestrator | 2025-10-09 10:06:22.313522 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-10-09 10:06:22.313532 | orchestrator | Thursday 09 October 2025 10:06:13 +0000 (0:00:01.089) 0:06:36.465 ****** 2025-10-09 10:06:22.313541 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:22.313551 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.313561 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:22.313570 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:22.313580 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:22.313589 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:22.313599 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:22.313608 | orchestrator | 2025-10-09 10:06:22.313618 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-10-09 10:06:22.313628 | orchestrator | Thursday 09 October 2025 10:06:14 +0000 (0:00:01.404) 0:06:37.870 ****** 2025-10-09 10:06:22.313638 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.313648 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:22.313657 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:22.313667 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:22.313676 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:22.313685 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:22.313695 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:22.313704 | orchestrator | 2025-10-09 10:06:22.313714 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-10-09 10:06:22.313724 | orchestrator | Thursday 09 October 2025 10:06:16 +0000 (0:00:01.188) 0:06:39.059 ****** 2025-10-09 10:06:22.313734 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.313743 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:22.313753 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:22.313762 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:22.313771 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:22.313781 | 
orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:22.313790 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:22.313800 | orchestrator | 2025-10-09 10:06:22.313809 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-10-09 10:06:22.313819 | orchestrator | Thursday 09 October 2025 10:06:17 +0000 (0:00:01.139) 0:06:40.198 ****** 2025-10-09 10:06:22.313829 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:22.313838 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:22.313848 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:22.313857 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:22.313874 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:22.313883 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:22.313893 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:22.313903 | orchestrator | 2025-10-09 10:06:22.313913 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-10-09 10:06:22.313922 | orchestrator | Thursday 09 October 2025 10:06:18 +0000 (0:00:01.113) 0:06:41.312 ****** 2025-10-09 10:06:22.313932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:06:22.313942 | orchestrator | 2025-10-09 10:06:22.313952 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:06:22.313962 | orchestrator | Thursday 09 October 2025 10:06:19 +0000 (0:00:01.170) 0:06:42.482 ****** 2025-10-09 10:06:22.313971 | orchestrator | 2025-10-09 10:06:22.313981 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:06:22.313991 | orchestrator | Thursday 09 October 2025 10:06:19 +0000 (0:00:00.040) 0:06:42.523 ****** 2025-10-09 10:06:22.314001 | orchestrator | 2025-10-09 10:06:22.314010 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:06:22.314079 | orchestrator | Thursday 09 October 2025 10:06:19 +0000 (0:00:00.048) 0:06:42.572 ****** 2025-10-09 10:06:22.314089 | orchestrator | 2025-10-09 10:06:22.314114 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:06:22.314124 | orchestrator | Thursday 09 October 2025 10:06:19 +0000 (0:00:00.043) 0:06:42.615 ****** 2025-10-09 10:06:22.314134 | orchestrator | 2025-10-09 10:06:22.314144 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:06:22.314154 | orchestrator | Thursday 09 October 2025 10:06:19 +0000 (0:00:00.045) 0:06:42.661 ****** 2025-10-09 10:06:22.314163 | orchestrator | 2025-10-09 10:06:22.314173 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:06:22.314183 | orchestrator | Thursday 09 October 2025 10:06:19 +0000 (0:00:00.064) 0:06:42.725 ****** 2025-10-09 10:06:22.314192 | orchestrator | 2025-10-09 10:06:22.314202 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:06:22.314212 | orchestrator | Thursday 09 October 2025 10:06:19 +0000 (0:00:00.040) 0:06:42.766 ****** 2025-10-09 10:06:22.314221 | orchestrator | 2025-10-09 10:06:22.314231 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-10-09 10:06:22.314241 | orchestrator | Thursday 09 October 2025 10:06:19 +0000 (0:00:00.042) 0:06:42.808 ****** 2025-10-09 10:06:22.314251 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:22.314260 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:22.314270 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:22.314280 | orchestrator | 2025-10-09 10:06:22.314289 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-10-09 10:06:22.314299 | orchestrator | Thursday 09 October 2025 10:06:20 +0000 (0:00:01.189) 0:06:43.998 ****** 2025-10-09 10:06:22.314309 | orchestrator | changed: [testbed-manager] 2025-10-09 10:06:22.314318 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:22.314328 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:22.314338 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:22.314347 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:22.314364 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:51.759774 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:51.759878 | orchestrator | 2025-10-09 10:06:51.759893 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-10-09 10:06:51.759905 | orchestrator | Thursday 09 October 2025 10:06:22 +0000 (0:00:01.332) 0:06:45.331 ****** 2025-10-09 10:06:51.759914 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:51.759924 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:51.759933 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:51.759942 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:51.759973 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:51.759982 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:51.759991 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:51.760000 | orchestrator | 2025-10-09 10:06:51.760008 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-10-09 10:06:51.760017 | orchestrator | Thursday 09 October 2025 10:06:24 +0000 (0:00:02.568) 0:06:47.899 ****** 2025-10-09 10:06:51.760027 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:51.760036 | orchestrator | 2025-10-09 10:06:51.760045 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-10-09 10:06:51.760054 | orchestrator | Thursday 09 October 2025 10:06:24 +0000 (0:00:00.119) 0:06:48.019 ****** 2025-10-09 10:06:51.760062 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.760072 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:51.760081 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:51.760090 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:51.760098 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:51.760107 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:51.760115 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:51.760124 | orchestrator | 2025-10-09 10:06:51.760133 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-10-09 10:06:51.760143 | orchestrator | Thursday 09 October 2025 10:06:26 +0000 (0:00:01.083) 0:06:49.102 ****** 2025-10-09 10:06:51.760151 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:51.760160 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:51.760168 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:51.760177 | 
orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:51.760185 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:51.760194 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:51.760203 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:51.760211 | orchestrator | 2025-10-09 10:06:51.760220 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-10-09 10:06:51.760229 | orchestrator | Thursday 09 October 2025 10:06:26 +0000 (0:00:00.614) 0:06:49.716 ****** 2025-10-09 10:06:51.760238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:06:51.760250 | orchestrator | 2025-10-09 10:06:51.760259 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-10-09 10:06:51.760268 | orchestrator | Thursday 09 October 2025 10:06:27 +0000 (0:00:01.118) 0:06:50.835 ****** 2025-10-09 10:06:51.760276 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.760285 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:51.760294 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:51.760303 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:51.760311 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:51.760320 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:51.760330 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:51.760340 | orchestrator | 2025-10-09 10:06:51.760350 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-10-09 10:06:51.760360 | orchestrator | Thursday 09 October 2025 10:06:28 +0000 (0:00:00.894) 0:06:51.729 ****** 2025-10-09 10:06:51.760371 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-10-09 10:06:51.760381 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-10-09 10:06:51.760391 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-10-09 10:06:51.760401 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-10-09 10:06:51.760433 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-10-09 10:06:51.760443 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-10-09 10:06:51.760467 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-10-09 10:06:51.760486 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-10-09 10:06:51.760496 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-10-09 10:06:51.760506 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-10-09 10:06:51.760516 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-10-09 10:06:51.760526 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-10-09 10:06:51.760535 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-10-09 10:06:51.760545 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-10-09 10:06:51.760555 | orchestrator | 2025-10-09 10:06:51.760565 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-10-09 10:06:51.760575 | orchestrator | Thursday 09 October 2025 10:06:31 +0000 (0:00:02.582) 0:06:54.312 ****** 2025-10-09 10:06:51.760585 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:51.760595 
| orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:51.760605 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:51.760615 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:51.760625 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:51.760635 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:51.760645 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:51.760654 | orchestrator | 2025-10-09 10:06:51.760665 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-10-09 10:06:51.760675 | orchestrator | Thursday 09 October 2025 10:06:31 +0000 (0:00:00.530) 0:06:54.842 ****** 2025-10-09 10:06:51.760700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:06:51.760712 | orchestrator | 2025-10-09 10:06:51.760721 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-10-09 10:06:51.760730 | orchestrator | Thursday 09 October 2025 10:06:32 +0000 (0:00:01.053) 0:06:55.895 ****** 2025-10-09 10:06:51.760738 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.760747 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:51.760756 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:51.760765 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:51.760773 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:51.760782 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:51.760791 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:51.760800 | orchestrator | 2025-10-09 10:06:51.760808 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-10-09 10:06:51.760817 | orchestrator | Thursday 09 October 2025 10:06:33 +0000 (0:00:00.858) 0:06:56.754 ****** 2025-10-09 10:06:51.760826 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.760835 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:51.760844 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:51.760852 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:51.760861 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:51.760869 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:51.760878 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:51.760887 | orchestrator | 2025-10-09 10:06:51.760896 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-10-09 10:06:51.760905 | orchestrator | Thursday 09 October 2025 10:06:34 +0000 (0:00:00.863) 0:06:57.617 ****** 2025-10-09 10:06:51.760914 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:51.760922 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:51.760931 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:51.760940 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:51.760948 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:51.760957 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:51.760966 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:51.760975 | orchestrator | 2025-10-09 10:06:51.760984 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-10-09 10:06:51.760999 | orchestrator | Thursday 09 October 2025 10:06:35 +0000 (0:00:00.565) 0:06:58.183 ****** 2025-10-09 
10:06:51.761008 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.761017 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:51.761026 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:51.761034 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:51.761043 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:51.761051 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:51.761060 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:51.761069 | orchestrator | 2025-10-09 10:06:51.761078 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-10-09 10:06:51.761087 | orchestrator | Thursday 09 October 2025 10:06:36 +0000 (0:00:01.743) 0:06:59.927 ****** 2025-10-09 10:06:51.761096 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:51.761104 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:51.761113 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:51.761122 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:51.761131 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:51.761139 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:51.761148 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:51.761157 | orchestrator | 2025-10-09 10:06:51.761165 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-10-09 10:06:51.761174 | orchestrator | Thursday 09 October 2025 10:06:37 +0000 (0:00:00.538) 0:07:00.466 ****** 2025-10-09 10:06:51.761183 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.761192 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:51.761201 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:51.761209 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:51.761218 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:51.761227 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:51.761235 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:51.761244 | orchestrator | 2025-10-09 10:06:51.761253 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-10-09 10:06:51.761262 | orchestrator | Thursday 09 October 2025 10:06:45 +0000 (0:00:07.585) 0:07:08.051 ****** 2025-10-09 10:06:51.761271 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:51.761279 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:51.761288 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:51.761297 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:51.761305 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:51.761314 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:51.761323 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.761331 | orchestrator | 2025-10-09 10:06:51.761341 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-10-09 10:06:51.761349 | orchestrator | Thursday 09 October 2025 10:06:47 +0000 (0:00:02.000) 0:07:10.052 ****** 2025-10-09 10:06:51.761358 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.761367 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:51.761376 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:51.761385 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:51.761393 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:51.761402 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:51.761426 | orchestrator | changed: [testbed-node-2] 
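The tasks above install the docker-compose-plugin package, copy an osism.target systemd unit and enable it on every host. The job itself performs no interactive verification at this point; as a rough, purely illustrative spot-check (not part of this pipeline), one could log into any testbed host afterwards and run something like the following. Package, command and unit names are taken from the task names in the log; anything beyond that would have to be confirmed against the osism.services.docker and osism.commons.docker_compose roles themselves.

  # engine and Compose plugin installed by the docker / docker_compose roles
  docker --version
  docker compose version
  # systemd units referenced by the tasks above
  systemctl is-active docker.service containerd.service
  systemctl is-enabled osism.target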
2025-10-09 10:06:51.761435 | orchestrator | 2025-10-09 10:06:51.761444 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-10-09 10:06:51.761452 | orchestrator | Thursday 09 October 2025 10:06:48 +0000 (0:00:01.915) 0:07:11.967 ****** 2025-10-09 10:06:51.761461 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.761470 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:51.761479 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:51.761487 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:51.761496 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:51.761505 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:51.761513 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:51.761528 | orchestrator | 2025-10-09 10:06:51.761537 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-10-09 10:06:51.761546 | orchestrator | Thursday 09 October 2025 10:06:50 +0000 (0:00:01.923) 0:07:13.890 ****** 2025-10-09 10:06:51.761555 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:51.761564 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:51.761572 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:51.761581 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:51.761595 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.058761 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.058885 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.058901 | orchestrator | 2025-10-09 10:07:24.058915 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-10-09 10:07:24.058929 | orchestrator | Thursday 09 October 2025 10:06:51 +0000 (0:00:00.892) 0:07:14.783 ****** 2025-10-09 10:07:24.058942 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:07:24.058954 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:07:24.058964 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:07:24.058975 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:07:24.058986 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:07:24.058997 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:07:24.059007 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:07:24.059018 | orchestrator | 2025-10-09 10:07:24.059029 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-10-09 10:07:24.059041 | orchestrator | Thursday 09 October 2025 10:06:52 +0000 (0:00:01.021) 0:07:15.804 ****** 2025-10-09 10:07:24.059052 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:07:24.059062 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:07:24.059073 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:07:24.059084 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:07:24.059094 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:07:24.059105 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:07:24.059115 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:07:24.059126 | orchestrator | 2025-10-09 10:07:24.059137 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-10-09 10:07:24.059148 | orchestrator | Thursday 09 October 2025 10:06:53 +0000 (0:00:00.563) 0:07:16.368 ****** 2025-10-09 10:07:24.059159 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:24.059170 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.059180 | orchestrator | ok: 
[testbed-node-4] 2025-10-09 10:07:24.059191 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.059201 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.059212 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.059223 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.059233 | orchestrator | 2025-10-09 10:07:24.059244 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-10-09 10:07:24.059255 | orchestrator | Thursday 09 October 2025 10:06:53 +0000 (0:00:00.642) 0:07:17.011 ****** 2025-10-09 10:07:24.059266 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:24.059277 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.059287 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:24.059298 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.059308 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.059318 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.059329 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.059339 | orchestrator | 2025-10-09 10:07:24.059350 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-10-09 10:07:24.059361 | orchestrator | Thursday 09 October 2025 10:06:54 +0000 (0:00:00.563) 0:07:17.574 ****** 2025-10-09 10:07:24.059396 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:24.059429 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.059440 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:24.059451 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.059462 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.059472 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.059507 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.059518 | orchestrator | 2025-10-09 10:07:24.059529 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-10-09 10:07:24.059540 | orchestrator | Thursday 09 October 2025 10:06:55 +0000 (0:00:00.588) 0:07:18.163 ****** 2025-10-09 10:07:24.059551 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:24.059562 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.059572 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:24.059583 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.059594 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.059604 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.059615 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.059625 | orchestrator | 2025-10-09 10:07:24.059636 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-10-09 10:07:24.059665 | orchestrator | Thursday 09 October 2025 10:07:00 +0000 (0:00:05.823) 0:07:23.986 ****** 2025-10-09 10:07:24.059677 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:07:24.059688 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:07:24.059698 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:07:24.059709 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:07:24.059720 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:07:24.059735 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:07:24.059746 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:07:24.059756 | orchestrator | 2025-10-09 10:07:24.059767 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-10-09 10:07:24.059778 | orchestrator | Thursday 09 October 2025 10:07:01 +0000 (0:00:00.555) 
0:07:24.542 ****** 2025-10-09 10:07:24.059791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:07:24.059805 | orchestrator | 2025-10-09 10:07:24.059816 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-10-09 10:07:24.059827 | orchestrator | Thursday 09 October 2025 10:07:02 +0000 (0:00:00.916) 0:07:25.459 ****** 2025-10-09 10:07:24.059838 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.059849 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:24.059860 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:24.059870 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.059881 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.059891 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.059902 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.059912 | orchestrator | 2025-10-09 10:07:24.059923 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-10-09 10:07:24.059934 | orchestrator | Thursday 09 October 2025 10:07:04 +0000 (0:00:02.027) 0:07:27.486 ****** 2025-10-09 10:07:24.059945 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:24.059956 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.059966 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:24.059977 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.059987 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.059998 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.060008 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.060019 | orchestrator | 2025-10-09 10:07:24.060052 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-10-09 10:07:24.060064 | orchestrator | Thursday 09 October 2025 10:07:05 +0000 (0:00:01.146) 0:07:28.633 ****** 2025-10-09 10:07:24.060075 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.060086 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:24.060097 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:24.060108 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.060118 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.060129 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.060140 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.060150 | orchestrator | 2025-10-09 10:07:24.060161 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-10-09 10:07:24.060181 | orchestrator | Thursday 09 October 2025 10:07:06 +0000 (0:00:00.850) 0:07:29.484 ****** 2025-10-09 10:07:24.060193 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-10-09 10:07:24.060206 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-10-09 10:07:24.060217 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-10-09 10:07:24.060227 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 
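Here the chrony role is distributing its chrony.conf template to the hosts; the matching restart appears further down in this log as RUNNING HANDLER [osism.services.chrony : Restart chrony service]. The job does not check the resulting time synchronization. A manual check, not executed here and shown only as a sketch, could look like this (service name assumes the Debian/Ubuntu packaging used on these nodes):

  # confirm chronyd is running with the freshly copied configuration
  systemctl is-active chrony
  # list configured time sources and the current tracking status
  chronyc sources -v
  chronyc tracking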
2025-10-09 10:07:24.060238 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-10-09 10:07:24.060249 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-10-09 10:07:24.060260 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-10-09 10:07:24.060271 | orchestrator | 2025-10-09 10:07:24.060282 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-10-09 10:07:24.060293 | orchestrator | Thursday 09 October 2025 10:07:08 +0000 (0:00:01.788) 0:07:31.273 ****** 2025-10-09 10:07:24.060304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:07:24.060315 | orchestrator | 2025-10-09 10:07:24.060326 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-10-09 10:07:24.060337 | orchestrator | Thursday 09 October 2025 10:07:09 +0000 (0:00:01.110) 0:07:32.383 ****** 2025-10-09 10:07:24.060348 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:24.060359 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:24.060369 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:24.060380 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:24.060391 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:24.060418 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:24.060429 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:24.060440 | orchestrator | 2025-10-09 10:07:24.060451 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-10-09 10:07:24.060461 | orchestrator | Thursday 09 October 2025 10:07:18 +0000 (0:00:09.526) 0:07:41.909 ****** 2025-10-09 10:07:24.060472 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:24.060483 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:24.060494 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.060504 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.060515 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.060525 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.060536 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.060546 | orchestrator | 2025-10-09 10:07:24.060563 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-10-09 10:07:24.060574 | orchestrator | Thursday 09 October 2025 10:07:20 +0000 (0:00:01.984) 0:07:43.894 ****** 2025-10-09 10:07:24.060585 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:24.060596 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:24.060606 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:24.060617 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:24.060627 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:24.060637 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:24.060648 | orchestrator | 2025-10-09 10:07:24.060659 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-10-09 10:07:24.060677 | orchestrator | Thursday 09 October 2025 
10:07:22 +0000 (0:00:01.352) 0:07:45.247 ****** 2025-10-09 10:07:24.060688 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:24.060699 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:24.060710 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:24.060721 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:24.060731 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:24.060742 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:24.060752 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:24.060763 | orchestrator | 2025-10-09 10:07:24.060774 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-10-09 10:07:24.060785 | orchestrator | 2025-10-09 10:07:24.060795 | orchestrator | TASK [Include hardening role] ************************************************** 2025-10-09 10:07:24.060806 | orchestrator | Thursday 09 October 2025 10:07:23 +0000 (0:00:01.286) 0:07:46.533 ****** 2025-10-09 10:07:24.060817 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:07:24.060828 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:07:24.060838 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:07:24.060849 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:07:24.060860 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:07:24.060870 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:07:24.060888 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:07:51.963990 | orchestrator | 2025-10-09 10:07:51.964103 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-10-09 10:07:51.964121 | orchestrator | 2025-10-09 10:07:51.964133 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-10-09 10:07:51.964145 | orchestrator | Thursday 09 October 2025 10:07:24 +0000 (0:00:00.546) 0:07:47.079 ****** 2025-10-09 10:07:51.964157 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.964169 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.964180 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.964191 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.964202 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.964213 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:51.964224 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.964235 | orchestrator | 2025-10-09 10:07:51.964246 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-10-09 10:07:51.964258 | orchestrator | Thursday 09 October 2025 10:07:25 +0000 (0:00:01.383) 0:07:48.463 ****** 2025-10-09 10:07:51.964269 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:51.964281 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:51.964292 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:51.964303 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:51.964314 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:51.964325 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:51.964335 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:51.964346 | orchestrator | 2025-10-09 10:07:51.964357 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-10-09 10:07:51.964368 | orchestrator | Thursday 09 October 2025 10:07:27 +0000 (0:00:01.683) 0:07:50.147 ****** 2025-10-09 10:07:51.964380 | orchestrator | skipping: [testbed-manager] 2025-10-09 
10:07:51.964391 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:07:51.964429 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:07:51.964440 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:07:51.964451 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:07:51.964463 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:07:51.964474 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:07:51.964484 | orchestrator | 2025-10-09 10:07:51.964496 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-10-09 10:07:51.964507 | orchestrator | Thursday 09 October 2025 10:07:27 +0000 (0:00:00.539) 0:07:50.686 ****** 2025-10-09 10:07:51.964519 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:07:51.964557 | orchestrator | 2025-10-09 10:07:51.964570 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-10-09 10:07:51.964583 | orchestrator | Thursday 09 October 2025 10:07:28 +0000 (0:00:01.072) 0:07:51.758 ****** 2025-10-09 10:07:51.964597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:07:51.964612 | orchestrator | 2025-10-09 10:07:51.964625 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-10-09 10:07:51.964637 | orchestrator | Thursday 09 October 2025 10:07:29 +0000 (0:00:00.891) 0:07:52.650 ****** 2025-10-09 10:07:51.964650 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:51.964662 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.964674 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.964686 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.964699 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.964711 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.964724 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.964736 | orchestrator | 2025-10-09 10:07:51.964748 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-10-09 10:07:51.964760 | orchestrator | Thursday 09 October 2025 10:07:38 +0000 (0:00:08.598) 0:08:01.249 ****** 2025-10-09 10:07:51.964773 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.964785 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.964797 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.964810 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:51.964837 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.964850 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.964863 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.964875 | orchestrator | 2025-10-09 10:07:51.964886 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-10-09 10:07:51.964897 | orchestrator | Thursday 09 October 2025 10:07:39 +0000 (0:00:00.883) 0:08:02.132 ****** 2025-10-09 10:07:51.964908 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.964919 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.964929 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.964940 | orchestrator | 
changed: [testbed-node-5] 2025-10-09 10:07:51.964950 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.964961 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.964971 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.964982 | orchestrator | 2025-10-09 10:07:51.964993 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-10-09 10:07:51.965003 | orchestrator | Thursday 09 October 2025 10:07:40 +0000 (0:00:01.623) 0:08:03.756 ****** 2025-10-09 10:07:51.965014 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.965025 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.965036 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.965046 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:51.965056 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.965067 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.965078 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.965088 | orchestrator | 2025-10-09 10:07:51.965099 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-10-09 10:07:51.965110 | orchestrator | Thursday 09 October 2025 10:07:42 +0000 (0:00:01.816) 0:08:05.573 ****** 2025-10-09 10:07:51.965121 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.965132 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.965143 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.965153 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:51.965180 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.965191 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.965202 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.965221 | orchestrator | 2025-10-09 10:07:51.965232 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-10-09 10:07:51.965243 | orchestrator | Thursday 09 October 2025 10:07:43 +0000 (0:00:01.363) 0:08:06.936 ****** 2025-10-09 10:07:51.965254 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.965265 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.965276 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.965287 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:51.965298 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.965309 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.965319 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.965330 | orchestrator | 2025-10-09 10:07:51.965341 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-10-09 10:07:51.965352 | orchestrator | 2025-10-09 10:07:51.965363 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-10-09 10:07:51.965374 | orchestrator | Thursday 09 October 2025 10:07:45 +0000 (0:00:01.381) 0:08:08.318 ****** 2025-10-09 10:07:51.965385 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:07:51.965413 | orchestrator | 2025-10-09 10:07:51.965425 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-10-09 10:07:51.965436 | orchestrator | Thursday 09 October 2025 10:07:46 +0000 (0:00:00.973) 0:08:09.291 ****** 2025-10-09 10:07:51.965447 | orchestrator | ok: 
[testbed-manager] 2025-10-09 10:07:51.965458 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:51.965469 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:51.965479 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:51.965490 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:51.965501 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:51.965512 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:51.965523 | orchestrator | 2025-10-09 10:07:51.965534 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-10-09 10:07:51.965545 | orchestrator | Thursday 09 October 2025 10:07:47 +0000 (0:00:00.863) 0:08:10.155 ****** 2025-10-09 10:07:51.965556 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.965567 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.965577 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:51.965588 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.965599 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.965609 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.965620 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.965631 | orchestrator | 2025-10-09 10:07:51.965642 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-10-09 10:07:51.965653 | orchestrator | Thursday 09 October 2025 10:07:48 +0000 (0:00:01.621) 0:08:11.776 ****** 2025-10-09 10:07:51.965664 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:07:51.965675 | orchestrator | 2025-10-09 10:07:51.965686 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-10-09 10:07:51.965697 | orchestrator | Thursday 09 October 2025 10:07:49 +0000 (0:00:00.936) 0:08:12.712 ****** 2025-10-09 10:07:51.965708 | orchestrator | ok: [testbed-manager] 2025-10-09 10:07:51.965719 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:07:51.965730 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:07:51.965740 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:07:51.965751 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:07:51.965762 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:07:51.965773 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:07:51.965784 | orchestrator | 2025-10-09 10:07:51.965795 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-10-09 10:07:51.965806 | orchestrator | Thursday 09 October 2025 10:07:50 +0000 (0:00:00.901) 0:08:13.613 ****** 2025-10-09 10:07:51.965817 | orchestrator | changed: [testbed-manager] 2025-10-09 10:07:51.965835 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:07:51.965846 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:07:51.965857 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:07:51.965868 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:07:51.965878 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:07:51.965894 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:07:51.965905 | orchestrator | 2025-10-09 10:07:51.965916 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:07:51.965928 | orchestrator | testbed-manager : ok=164  changed=38  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2025-10-09 10:07:51.965939 | orchestrator | testbed-node-0 : ok=173  
changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-10-09 10:07:51.965951 | orchestrator | testbed-node-1 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-10-09 10:07:51.965962 | orchestrator | testbed-node-2 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-10-09 10:07:51.965973 | orchestrator | testbed-node-3 : ok=171  changed=63  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2025-10-09 10:07:51.965984 | orchestrator | testbed-node-4 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-09 10:07:51.965994 | orchestrator | testbed-node-5 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-09 10:07:51.966005 | orchestrator |
2025-10-09 10:07:51.966072 | orchestrator |
2025-10-09 10:07:51.966092 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:07:52.466650 | orchestrator | Thursday 09 October 2025 10:07:51 +0000 (0:00:01.360) 0:08:14.974 ******
2025-10-09 10:07:52.466737 | orchestrator | ===============================================================================
2025-10-09 10:07:52.466749 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.64s
2025-10-09 10:07:52.466761 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.68s
2025-10-09 10:07:52.466772 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.65s
2025-10-09 10:07:52.466783 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.11s
2025-10-09 10:07:52.466794 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.51s
2025-10-09 10:07:52.466807 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.57s
2025-10-09 10:07:52.466818 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.66s
2025-10-09 10:07:52.466829 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.58s
2025-10-09 10:07:52.466839 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.53s
2025-10-09 10:07:52.466850 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.60s
2025-10-09 10:07:52.466861 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.56s
2025-10-09 10:07:52.466872 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.96s
2025-10-09 10:07:52.466882 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.85s
2025-10-09 10:07:52.466893 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.73s
2025-10-09 10:07:52.466904 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.59s
2025-10-09 10:07:52.466915 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.47s
2025-10-09 10:07:52.466926 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.26s
2025-10-09 10:07:52.466959 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.17s
2025-10-09 10:07:52.466970 | orchestrator | osism.services.chrony : Populate service facts
-------------------------- 5.82s 2025-10-09 10:07:52.466981 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.78s 2025-10-09 10:07:52.799593 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-10-09 10:07:52.799658 | orchestrator | + osism apply network 2025-10-09 10:08:05.904475 | orchestrator | 2025-10-09 10:08:05 | INFO  | Task bfb76909-f0f2-4548-8f42-2b1c1c52c553 (network) was prepared for execution. 2025-10-09 10:08:05.904593 | orchestrator | 2025-10-09 10:08:05 | INFO  | It takes a moment until task bfb76909-f0f2-4548-8f42-2b1c1c52c553 (network) has been started and output is visible here. 2025-10-09 10:08:35.761854 | orchestrator | 2025-10-09 10:08:35.761973 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-10-09 10:08:35.761990 | orchestrator | 2025-10-09 10:08:35.762002 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-10-09 10:08:35.762075 | orchestrator | Thursday 09 October 2025 10:08:10 +0000 (0:00:00.287) 0:00:00.287 ****** 2025-10-09 10:08:35.762089 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:35.762131 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:35.762144 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:35.762155 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:35.762166 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:35.762177 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:35.762187 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:35.762198 | orchestrator | 2025-10-09 10:08:35.762209 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-10-09 10:08:35.762220 | orchestrator | Thursday 09 October 2025 10:08:11 +0000 (0:00:00.739) 0:00:01.026 ****** 2025-10-09 10:08:35.762247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:08:35.762261 | orchestrator | 2025-10-09 10:08:35.762272 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-10-09 10:08:35.762283 | orchestrator | Thursday 09 October 2025 10:08:12 +0000 (0:00:01.296) 0:00:02.323 ****** 2025-10-09 10:08:35.762294 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:35.762305 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:35.762316 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:35.762327 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:35.762337 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:35.762348 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:35.762359 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:35.762369 | orchestrator | 2025-10-09 10:08:35.762380 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-10-09 10:08:35.762425 | orchestrator | Thursday 09 October 2025 10:08:14 +0000 (0:00:01.987) 0:00:04.311 ****** 2025-10-09 10:08:35.762439 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:35.762452 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:35.762464 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:35.762476 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:35.762489 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:35.762501 | orchestrator | ok: [testbed-node-4] 2025-10-09 
10:08:35.762512 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:35.762525 | orchestrator | 2025-10-09 10:08:35.762537 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-10-09 10:08:35.762549 | orchestrator | Thursday 09 October 2025 10:08:16 +0000 (0:00:01.723) 0:00:06.034 ****** 2025-10-09 10:08:35.762562 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-10-09 10:08:35.762575 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-10-09 10:08:35.762587 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-10-09 10:08:35.762622 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-10-09 10:08:35.762634 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-10-09 10:08:35.762647 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-10-09 10:08:35.762659 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-10-09 10:08:35.762672 | orchestrator | 2025-10-09 10:08:35.762685 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-10-09 10:08:35.762698 | orchestrator | Thursday 09 October 2025 10:08:17 +0000 (0:00:00.987) 0:00:07.021 ****** 2025-10-09 10:08:35.762710 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:08:35.762723 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:08:35.762736 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-10-09 10:08:35.762748 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-10-09 10:08:35.762758 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-10-09 10:08:35.762769 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:08:35.762780 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-10-09 10:08:35.762790 | orchestrator | 2025-10-09 10:08:35.762801 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-10-09 10:08:35.762812 | orchestrator | Thursday 09 October 2025 10:08:20 +0000 (0:00:03.607) 0:00:10.629 ****** 2025-10-09 10:08:35.762823 | orchestrator | changed: [testbed-manager] 2025-10-09 10:08:35.762834 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:35.762844 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:35.762855 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:08:35.762866 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:35.762876 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:35.762888 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:35.762899 | orchestrator | 2025-10-09 10:08:35.762910 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-10-09 10:08:35.762921 | orchestrator | Thursday 09 October 2025 10:08:22 +0000 (0:00:01.499) 0:00:12.129 ****** 2025-10-09 10:08:35.762932 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:08:35.762942 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:08:35.762953 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-10-09 10:08:35.762964 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-10-09 10:08:35.762974 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-10-09 10:08:35.762985 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:08:35.762996 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-10-09 10:08:35.763006 | orchestrator | 2025-10-09 10:08:35.763017 | orchestrator | TASK [osism.commons.network : Check if path for interface file 
exists] ********* 2025-10-09 10:08:35.763028 | orchestrator | Thursday 09 October 2025 10:08:24 +0000 (0:00:02.059) 0:00:14.188 ****** 2025-10-09 10:08:35.763039 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:35.763050 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:35.763060 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:35.763071 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:35.763082 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:35.763092 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:35.763103 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:35.763113 | orchestrator | 2025-10-09 10:08:35.763124 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-10-09 10:08:35.763152 | orchestrator | Thursday 09 October 2025 10:08:25 +0000 (0:00:01.160) 0:00:15.348 ****** 2025-10-09 10:08:35.763164 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:08:35.763174 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:08:35.763185 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:08:35.763196 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:08:35.763206 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:08:35.763217 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:08:35.763228 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:08:35.763238 | orchestrator | 2025-10-09 10:08:35.763249 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-10-09 10:08:35.763268 | orchestrator | Thursday 09 October 2025 10:08:26 +0000 (0:00:00.747) 0:00:16.096 ****** 2025-10-09 10:08:35.763278 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:35.763289 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:35.763300 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:35.763310 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:35.763321 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:35.763332 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:35.763343 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:35.763353 | orchestrator | 2025-10-09 10:08:35.763365 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-10-09 10:08:35.763376 | orchestrator | Thursday 09 October 2025 10:08:28 +0000 (0:00:02.204) 0:00:18.301 ****** 2025-10-09 10:08:35.763413 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:08:35.763425 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:08:35.763436 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:08:35.763447 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:08:35.763457 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:08:35.763468 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:08:35.763479 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-10-09 10:08:35.763492 | orchestrator | 2025-10-09 10:08:35.763503 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-10-09 10:08:35.763514 | orchestrator | Thursday 09 October 2025 10:08:29 +0000 (0:00:00.974) 0:00:19.276 ****** 2025-10-09 10:08:35.763524 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:35.763535 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:35.763546 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:35.763556 | orchestrator | changed: 
[testbed-node-2] 2025-10-09 10:08:35.763567 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:35.763578 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:35.763588 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:35.763599 | orchestrator | 2025-10-09 10:08:35.763610 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-10-09 10:08:35.763621 | orchestrator | Thursday 09 October 2025 10:08:31 +0000 (0:00:01.716) 0:00:20.992 ****** 2025-10-09 10:08:35.763632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:08:35.763645 | orchestrator | 2025-10-09 10:08:35.763656 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-10-09 10:08:35.763667 | orchestrator | Thursday 09 October 2025 10:08:32 +0000 (0:00:01.311) 0:00:22.304 ****** 2025-10-09 10:08:35.763677 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:35.763688 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:35.763699 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:35.763710 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:35.763720 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:35.763731 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:35.763741 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:35.763752 | orchestrator | 2025-10-09 10:08:35.763763 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-10-09 10:08:35.763774 | orchestrator | Thursday 09 October 2025 10:08:33 +0000 (0:00:01.002) 0:00:23.307 ****** 2025-10-09 10:08:35.763785 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:35.763796 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:35.763806 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:35.763817 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:35.763827 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:35.763838 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:35.763848 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:35.763859 | orchestrator | 2025-10-09 10:08:35.763870 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-10-09 10:08:35.763888 | orchestrator | Thursday 09 October 2025 10:08:34 +0000 (0:00:00.936) 0:00:24.244 ****** 2025-10-09 10:08:35.763899 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-10-09 10:08:35.763910 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-10-09 10:08:35.763921 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-10-09 10:08:35.763932 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-10-09 10:08:35.763943 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-10-09 10:08:35.763954 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-10-09 10:08:35.763964 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-10-09 10:08:35.763975 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-10-09 10:08:35.763986 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 
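For orientation: the "Copy netplan configuration" task above renders /etc/netplan/01-osism.yaml, which then supersedes the cloud-init default 50-cloud-init.yaml that the cleanup task removes. A minimal sketch of such a netplan document, using the management addressing visible in this log (the interface name, gateway and resolver below are assumptions, not values taken from the generated file):

network:
  version: 2
  ethernets:
    ens3:                        # illustrative interface name, not taken from this deployment
      dhcp4: false
      addresses:
        - 192.168.16.10/20       # management address range used by the testbed nodes
      routes:
        - to: default
          via: 192.168.16.1      # assumed gateway
      nameservers:
        addresses:
          - 9.9.9.9              # assumed resolver

Activating the file is deferred at this point; the workarounds play further down runs an explicit netplan apply on the manager and on the other nodes.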
2025-10-09 10:08:35.763996 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-10-09 10:08:35.764015 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-10-09 10:08:35.764027 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-10-09 10:08:35.764038 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-10-09 10:08:35.764049 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-10-09 10:08:35.764060 | orchestrator | 2025-10-09 10:08:35.764078 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-10-09 10:08:53.784349 | orchestrator | Thursday 09 October 2025 10:08:35 +0000 (0:00:01.212) 0:00:25.456 ****** 2025-10-09 10:08:53.784518 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:08:53.784535 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:08:53.784547 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:08:53.784558 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:08:53.784569 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:08:53.784579 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:08:53.784590 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:08:53.784601 | orchestrator | 2025-10-09 10:08:53.784613 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-10-09 10:08:53.784625 | orchestrator | Thursday 09 October 2025 10:08:36 +0000 (0:00:00.699) 0:00:26.155 ****** 2025-10-09 10:08:53.784654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-5, testbed-node-4, testbed-node-3, testbed-node-2 2025-10-09 10:08:53.784669 | orchestrator | 2025-10-09 10:08:53.784681 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-10-09 10:08:53.784692 | orchestrator | Thursday 09 October 2025 10:08:41 +0000 (0:00:04.908) 0:00:31.064 ****** 2025-10-09 10:08:53.784704 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784775 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', 
'192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.784787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.784821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.784850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.784879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.784893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.784910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.784923 | orchestrator | 2025-10-09 10:08:53.784936 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-10-09 10:08:53.784948 | orchestrator | Thursday 09 October 2025 10:08:47 +0000 (0:00:06.302) 0:00:37.366 ****** 2025-10-09 10:08:53.784961 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784974 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.784987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.785007 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.785020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.785033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.785047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:08:53.785059 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.785071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.785084 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.785097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.785109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:08:53.785131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-10-09 
10:09:00.538090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:09:00.538247 | orchestrator | 2025-10-09 10:09:00.538266 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-10-09 10:09:00.538279 | orchestrator | Thursday 09 October 2025 10:08:53 +0000 (0:00:06.111) 0:00:43.477 ****** 2025-10-09 10:09:00.538308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:09:00.538321 | orchestrator | 2025-10-09 10:09:00.538333 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-10-09 10:09:00.538367 | orchestrator | Thursday 09 October 2025 10:08:55 +0000 (0:00:01.380) 0:00:44.857 ****** 2025-10-09 10:09:00.538379 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:00.538455 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:09:00.538467 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:09:00.538477 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:09:00.538488 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:09:00.538499 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:09:00.538509 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:09:00.538520 | orchestrator | 2025-10-09 10:09:00.538534 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-10-09 10:09:00.538547 | orchestrator | Thursday 09 October 2025 10:08:56 +0000 (0:00:01.213) 0:00:46.071 ****** 2025-10-09 10:09:00.538560 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:09:00.538575 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:09:00.538588 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:09:00.538600 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:09:00.538613 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:09:00.538627 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:09:00.538640 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:09:00.538652 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:09:00.538665 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:09:00.538677 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:09:00.538689 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:09:00.538702 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:09:00.538714 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:09:00.538727 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:09:00.538739 | orchestrator | skipping: [testbed-node-1] 
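The two "Create systemd networkd netdev/network files" tasks above write one file pair per VXLAN under /etc/systemd/network/. A minimal sketch of what the vxlan0 pair on testbed-manager could contain, expressed as Ansible copy tasks and using only the parameters logged above (the exact template contents and the handling of the unicast peer list in dests are assumptions; systemd-networkd's [VXLAN] section carries only a single Remote=, so per-peer forwarding entries are typically programmed separately, for example by a dispatcher script):

- name: Sketch of the vxlan0 netdev file          # assumed layout, not the osism template
  ansible.builtin.copy:
    dest: /etc/systemd/network/30-vxlan0.netdev
    content: |
      [NetDev]
      Name=vxlan0
      Kind=vxlan
      MTUBytes=1350

      [VXLAN]
      VNI=42
      Local=192.168.16.5

- name: Sketch of the matching vxlan0 network file   # assumed layout, not the osism template
  ansible.builtin.copy:
    dest: /etc/systemd/network/30-vxlan0.network
    content: |
      [Match]
      Name=vxlan0

      [Network]
      Address=192.168.112.5/20

vxlan1 (VNI 23, addresses in 192.168.128.0/20) follows the same pattern with its own file pair.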
2025-10-09 10:09:00.538751 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:09:00.538764 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:09:00.538777 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:09:00.538789 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:09:00.538801 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:09:00.538813 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:09:00.538826 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:09:00.538838 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:09:00.538850 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:09:00.538863 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:09:00.538876 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:09:00.538887 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:09:00.538898 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:09:00.538909 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:09:00.538920 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:09:00.538930 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:09:00.538949 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:09:00.538960 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:09:00.538971 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:09:00.538982 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:09:00.538993 | orchestrator | 2025-10-09 10:09:00.539004 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-10-09 10:09:00.539034 | orchestrator | Thursday 09 October 2025 10:08:58 +0000 (0:00:02.251) 0:00:48.323 ****** 2025-10-09 10:09:00.539046 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:09:00.539057 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:09:00.539068 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:09:00.539079 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:09:00.539089 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:09:00.539100 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:09:00.539111 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:09:00.539121 | orchestrator | 2025-10-09 10:09:00.539132 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-10-09 10:09:00.539149 | orchestrator | Thursday 09 October 2025 10:08:59 +0000 (0:00:00.687) 0:00:49.010 ****** 2025-10-09 10:09:00.539161 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:09:00.539171 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:09:00.539182 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:09:00.539193 | orchestrator | skipping: 
[testbed-node-2] 2025-10-09 10:09:00.539203 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:09:00.539214 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:09:00.539225 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:09:00.539236 | orchestrator | 2025-10-09 10:09:00.539247 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:09:00.539259 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:09:00.539270 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:09:00.539281 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:09:00.539292 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:09:00.539303 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:09:00.539314 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:09:00.539325 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:09:00.539335 | orchestrator | 2025-10-09 10:09:00.539346 | orchestrator | 2025-10-09 10:09:00.539357 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:09:00.539368 | orchestrator | Thursday 09 October 2025 10:09:00 +0000 (0:00:00.775) 0:00:49.786 ****** 2025-10-09 10:09:00.539379 | orchestrator | =============================================================================== 2025-10-09 10:09:00.539417 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.30s 2025-10-09 10:09:00.539429 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.11s 2025-10-09 10:09:00.539440 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.91s 2025-10-09 10:09:00.539458 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.61s 2025-10-09 10:09:00.539469 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.25s 2025-10-09 10:09:00.539480 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s 2025-10-09 10:09:00.539490 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.06s 2025-10-09 10:09:00.539501 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.99s 2025-10-09 10:09:00.539512 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.72s 2025-10-09 10:09:00.539523 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s 2025-10-09 10:09:00.539534 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.50s 2025-10-09 10:09:00.539545 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.38s 2025-10-09 10:09:00.539555 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-10-09 10:09:00.539566 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.30s 2025-10-09 10:09:00.539577 | 
orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s 2025-10-09 10:09:00.539588 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.21s 2025-10-09 10:09:00.539599 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s 2025-10-09 10:09:00.539610 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2025-10-09 10:09:00.539620 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s 2025-10-09 10:09:00.539631 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.97s 2025-10-09 10:09:00.887305 | orchestrator | + osism apply wireguard 2025-10-09 10:09:13.036500 | orchestrator | 2025-10-09 10:09:13 | INFO  | Task 2cdcc4f3-11d3-4057-b639-f5c447630206 (wireguard) was prepared for execution. 2025-10-09 10:09:13.036617 | orchestrator | 2025-10-09 10:09:13 | INFO  | It takes a moment until task 2cdcc4f3-11d3-4057-b639-f5c447630206 (wireguard) has been started and output is visible here. 2025-10-09 10:09:34.428753 | orchestrator | 2025-10-09 10:09:34.428877 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-10-09 10:09:34.428895 | orchestrator | 2025-10-09 10:09:34.428907 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-10-09 10:09:34.428919 | orchestrator | Thursday 09 October 2025 10:09:17 +0000 (0:00:00.274) 0:00:00.274 ****** 2025-10-09 10:09:34.428931 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:34.428944 | orchestrator | 2025-10-09 10:09:34.428955 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-10-09 10:09:34.428985 | orchestrator | Thursday 09 October 2025 10:09:19 +0000 (0:00:01.692) 0:00:01.967 ****** 2025-10-09 10:09:34.428997 | orchestrator | changed: [testbed-manager] 2025-10-09 10:09:34.429009 | orchestrator | 2025-10-09 10:09:34.429020 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-10-09 10:09:34.429032 | orchestrator | Thursday 09 October 2025 10:09:26 +0000 (0:00:07.287) 0:00:09.254 ****** 2025-10-09 10:09:34.429043 | orchestrator | changed: [testbed-manager] 2025-10-09 10:09:34.429054 | orchestrator | 2025-10-09 10:09:34.429065 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-10-09 10:09:34.429077 | orchestrator | Thursday 09 October 2025 10:09:27 +0000 (0:00:00.576) 0:00:09.831 ****** 2025-10-09 10:09:34.429088 | orchestrator | changed: [testbed-manager] 2025-10-09 10:09:34.429099 | orchestrator | 2025-10-09 10:09:34.429110 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-10-09 10:09:34.429121 | orchestrator | Thursday 09 October 2025 10:09:27 +0000 (0:00:00.454) 0:00:10.286 ****** 2025-10-09 10:09:34.429132 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:34.429143 | orchestrator | 2025-10-09 10:09:34.429154 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-10-09 10:09:34.429188 | orchestrator | Thursday 09 October 2025 10:09:28 +0000 (0:00:00.686) 0:00:10.972 ****** 2025-10-09 10:09:34.429199 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:34.429211 | orchestrator | 2025-10-09 10:09:34.429222 | orchestrator | TASK [osism.services.wireguard : 
Get private key - server] ********************* 2025-10-09 10:09:34.429233 | orchestrator | Thursday 09 October 2025 10:09:28 +0000 (0:00:00.418) 0:00:11.391 ****** 2025-10-09 10:09:34.429244 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:34.429255 | orchestrator | 2025-10-09 10:09:34.429266 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-10-09 10:09:34.429277 | orchestrator | Thursday 09 October 2025 10:09:29 +0000 (0:00:00.427) 0:00:11.818 ****** 2025-10-09 10:09:34.429288 | orchestrator | changed: [testbed-manager] 2025-10-09 10:09:34.429299 | orchestrator | 2025-10-09 10:09:34.429310 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-10-09 10:09:34.429321 | orchestrator | Thursday 09 October 2025 10:09:30 +0000 (0:00:01.290) 0:00:13.109 ****** 2025-10-09 10:09:34.429332 | orchestrator | changed: [testbed-manager] => (item=None) 2025-10-09 10:09:34.429344 | orchestrator | changed: [testbed-manager] 2025-10-09 10:09:34.429355 | orchestrator | 2025-10-09 10:09:34.429366 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-10-09 10:09:34.429377 | orchestrator | Thursday 09 October 2025 10:09:31 +0000 (0:00:00.968) 0:00:14.077 ****** 2025-10-09 10:09:34.429414 | orchestrator | changed: [testbed-manager] 2025-10-09 10:09:34.429425 | orchestrator | 2025-10-09 10:09:34.429436 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-10-09 10:09:34.429447 | orchestrator | Thursday 09 October 2025 10:09:33 +0000 (0:00:01.736) 0:00:15.814 ****** 2025-10-09 10:09:34.429458 | orchestrator | changed: [testbed-manager] 2025-10-09 10:09:34.429469 | orchestrator | 2025-10-09 10:09:34.429480 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:09:34.429491 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:09:34.429504 | orchestrator | 2025-10-09 10:09:34.429515 | orchestrator | 2025-10-09 10:09:34.429526 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:09:34.429536 | orchestrator | Thursday 09 October 2025 10:09:34 +0000 (0:00:01.019) 0:00:16.833 ****** 2025-10-09 10:09:34.429547 | orchestrator | =============================================================================== 2025-10-09 10:09:34.429558 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.29s 2025-10-09 10:09:34.429569 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.74s 2025-10-09 10:09:34.429579 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.69s 2025-10-09 10:09:34.429590 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.29s 2025-10-09 10:09:34.429600 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.02s 2025-10-09 10:09:34.429611 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2025-10-09 10:09:34.429622 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.69s 2025-10-09 10:09:34.429632 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2025-10-09 10:09:34.429643 | orchestrator | 
osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-10-09 10:09:34.429654 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-10-09 10:09:34.429665 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2025-10-09 10:09:34.774662 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-10-09 10:09:34.820436 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-10-09 10:09:34.820484 | orchestrator | Dload Upload Total Spent Left Speed 2025-10-09 10:09:34.900509 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 174 0 --:--:-- --:--:-- --:--:-- 175 2025-10-09 10:09:34.921105 | orchestrator | + osism apply --environment custom workarounds 2025-10-09 10:09:36.954871 | orchestrator | 2025-10-09 10:09:36 | INFO  | Trying to run play workarounds in environment custom 2025-10-09 10:09:47.093605 | orchestrator | 2025-10-09 10:09:47 | INFO  | Task cf04b868-7d55-427d-bd4b-6e408b92fe32 (workarounds) was prepared for execution. 2025-10-09 10:09:47.093743 | orchestrator | 2025-10-09 10:09:47 | INFO  | It takes a moment until task cf04b868-7d55-427d-bd4b-6e408b92fe32 (workarounds) has been started and output is visible here. 2025-10-09 10:10:13.019710 | orchestrator | 2025-10-09 10:10:13.019833 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:10:13.019849 | orchestrator | 2025-10-09 10:10:13.019862 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-10-09 10:10:13.019874 | orchestrator | Thursday 09 October 2025 10:09:51 +0000 (0:00:00.153) 0:00:00.153 ****** 2025-10-09 10:10:13.019887 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-10-09 10:10:13.019898 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-10-09 10:10:13.019909 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-10-09 10:10:13.019920 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-10-09 10:10:13.019931 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-10-09 10:10:13.019941 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-10-09 10:10:13.019952 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-10-09 10:10:13.019963 | orchestrator | 2025-10-09 10:10:13.019975 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-10-09 10:10:13.019985 | orchestrator | 2025-10-09 10:10:13.019996 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-10-09 10:10:13.020007 | orchestrator | Thursday 09 October 2025 10:09:52 +0000 (0:00:00.816) 0:00:00.969 ****** 2025-10-09 10:10:13.020019 | orchestrator | ok: [testbed-manager] 2025-10-09 10:10:13.020031 | orchestrator | 2025-10-09 10:10:13.020043 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-10-09 10:10:13.020054 | orchestrator | 2025-10-09 10:10:13.020065 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-10-09 10:10:13.020076 | orchestrator | Thursday 09 October 2025 10:09:54 +0000 (0:00:02.614) 0:00:03.584 ****** 2025-10-09 10:10:13.020087 | 
orchestrator | ok: [testbed-node-3] 2025-10-09 10:10:13.020098 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:10:13.020109 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:10:13.020120 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:10:13.020131 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:10:13.020141 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:10:13.020152 | orchestrator | 2025-10-09 10:10:13.020163 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-10-09 10:10:13.020174 | orchestrator | 2025-10-09 10:10:13.020184 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-10-09 10:10:13.020195 | orchestrator | Thursday 09 October 2025 10:09:56 +0000 (0:00:01.849) 0:00:05.434 ****** 2025-10-09 10:10:13.020207 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:10:13.020220 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:10:13.020230 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:10:13.020241 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:10:13.020276 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:10:13.020289 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:10:13.020302 | orchestrator | 2025-10-09 10:10:13.020314 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-10-09 10:10:13.020326 | orchestrator | Thursday 09 October 2025 10:09:58 +0000 (0:00:01.590) 0:00:07.025 ****** 2025-10-09 10:10:13.020339 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:10:13.020352 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:10:13.020364 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:10:13.020415 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:10:13.020429 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:10:13.020441 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:10:13.020453 | orchestrator | 2025-10-09 10:10:13.020467 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-10-09 10:10:13.020479 | orchestrator | Thursday 09 October 2025 10:10:01 +0000 (0:00:03.710) 0:00:10.736 ****** 2025-10-09 10:10:13.020492 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:10:13.020504 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:10:13.020520 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:10:13.020533 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:10:13.020546 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:10:13.020558 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:10:13.020570 | orchestrator | 2025-10-09 10:10:13.020582 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-10-09 10:10:13.020595 | orchestrator | 2025-10-09 10:10:13.020607 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-10-09 10:10:13.020619 | orchestrator | Thursday 09 October 2025 10:10:02 +0000 (0:00:00.856) 
0:00:11.593 ****** 2025-10-09 10:10:13.020629 | orchestrator | changed: [testbed-manager] 2025-10-09 10:10:13.020640 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:10:13.020651 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:10:13.020661 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:10:13.020672 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:10:13.020682 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:10:13.020693 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:10:13.020704 | orchestrator | 2025-10-09 10:10:13.020715 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-10-09 10:10:13.020726 | orchestrator | Thursday 09 October 2025 10:10:04 +0000 (0:00:01.727) 0:00:13.320 ****** 2025-10-09 10:10:13.020736 | orchestrator | changed: [testbed-manager] 2025-10-09 10:10:13.020747 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:10:13.020758 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:10:13.020768 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:10:13.020779 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:10:13.020790 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:10:13.020818 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:10:13.020829 | orchestrator | 2025-10-09 10:10:13.020840 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-10-09 10:10:13.020851 | orchestrator | Thursday 09 October 2025 10:10:06 +0000 (0:00:01.631) 0:00:14.952 ****** 2025-10-09 10:10:13.020862 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:10:13.020873 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:10:13.020883 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:10:13.020894 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:10:13.020905 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:10:13.020916 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:10:13.020926 | orchestrator | ok: [testbed-manager] 2025-10-09 10:10:13.020937 | orchestrator | 2025-10-09 10:10:13.020948 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-10-09 10:10:13.020959 | orchestrator | Thursday 09 October 2025 10:10:07 +0000 (0:00:01.575) 0:00:16.527 ****** 2025-10-09 10:10:13.020970 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:10:13.020989 | orchestrator | changed: [testbed-manager] 2025-10-09 10:10:13.021000 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:10:13.021011 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:10:13.021022 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:10:13.021033 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:10:13.021043 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:10:13.021054 | orchestrator | 2025-10-09 10:10:13.021065 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-10-09 10:10:13.021075 | orchestrator | Thursday 09 October 2025 10:10:09 +0000 (0:00:01.857) 0:00:18.384 ****** 2025-10-09 10:10:13.021086 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:10:13.021097 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:10:13.021108 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:10:13.021118 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:10:13.021129 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:10:13.021140 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:10:13.021150 | 
orchestrator | skipping: [testbed-node-2] 2025-10-09 10:10:13.021161 | orchestrator | 2025-10-09 10:10:13.021189 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-10-09 10:10:13.021201 | orchestrator | 2025-10-09 10:10:13.021212 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-10-09 10:10:13.021223 | orchestrator | Thursday 09 October 2025 10:10:10 +0000 (0:00:00.694) 0:00:19.078 ****** 2025-10-09 10:10:13.021234 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:10:13.021245 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:10:13.021255 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:10:13.021266 | orchestrator | ok: [testbed-manager] 2025-10-09 10:10:13.021277 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:10:13.021287 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:10:13.021298 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:10:13.021308 | orchestrator | 2025-10-09 10:10:13.021319 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:10:13.021331 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:10:13.021343 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:13.021354 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:13.021365 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:13.021395 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:13.021407 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:13.021417 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:13.021428 | orchestrator | 2025-10-09 10:10:13.021439 | orchestrator | 2025-10-09 10:10:13.021450 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:10:13.021461 | orchestrator | Thursday 09 October 2025 10:10:12 +0000 (0:00:02.738) 0:00:21.817 ****** 2025-10-09 10:10:13.021472 | orchestrator | =============================================================================== 2025-10-09 10:10:13.021483 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s 2025-10-09 10:10:13.021493 | orchestrator | Install python3-docker -------------------------------------------------- 2.74s 2025-10-09 10:10:13.021504 | orchestrator | Apply netplan configuration --------------------------------------------- 2.61s 2025-10-09 10:10:13.021522 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.86s 2025-10-09 10:10:13.021532 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s 2025-10-09 10:10:13.021543 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.73s 2025-10-09 10:10:13.021554 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s 2025-10-09 10:10:13.021565 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.59s 2025-10-09 10:10:13.021575 | orchestrator | Reload systemd 
daemon --------------------------------------------------- 1.58s 2025-10-09 10:10:13.021586 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.86s 2025-10-09 10:10:13.021601 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s 2025-10-09 10:10:13.021619 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.69s 2025-10-09 10:10:13.805850 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-10-09 10:10:26.031779 | orchestrator | 2025-10-09 10:10:26 | INFO  | Task 3109fb4f-d12d-4478-bd31-36cadfbca0dd (reboot) was prepared for execution. 2025-10-09 10:10:26.031894 | orchestrator | 2025-10-09 10:10:26 | INFO  | It takes a moment until task 3109fb4f-d12d-4478-bd31-36cadfbca0dd (reboot) has been started and output is visible here. 2025-10-09 10:10:36.555724 | orchestrator | 2025-10-09 10:10:36.555839 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:10:36.555855 | orchestrator | 2025-10-09 10:10:36.555867 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:10:36.555879 | orchestrator | Thursday 09 October 2025 10:10:30 +0000 (0:00:00.259) 0:00:00.259 ****** 2025-10-09 10:10:36.555891 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:10:36.555903 | orchestrator | 2025-10-09 10:10:36.555914 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:10:36.555925 | orchestrator | Thursday 09 October 2025 10:10:30 +0000 (0:00:00.115) 0:00:00.374 ****** 2025-10-09 10:10:36.555936 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:10:36.555947 | orchestrator | 2025-10-09 10:10:36.555958 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:10:36.555969 | orchestrator | Thursday 09 October 2025 10:10:31 +0000 (0:00:00.959) 0:00:01.334 ****** 2025-10-09 10:10:36.555980 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:10:36.555991 | orchestrator | 2025-10-09 10:10:36.556002 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:10:36.556015 | orchestrator | 2025-10-09 10:10:36.556033 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:10:36.556052 | orchestrator | Thursday 09 October 2025 10:10:31 +0000 (0:00:00.119) 0:00:01.453 ****** 2025-10-09 10:10:36.556069 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:10:36.556088 | orchestrator | 2025-10-09 10:10:36.556105 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:10:36.556123 | orchestrator | Thursday 09 October 2025 10:10:31 +0000 (0:00:00.109) 0:00:01.563 ****** 2025-10-09 10:10:36.556141 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:10:36.556159 | orchestrator | 2025-10-09 10:10:36.556179 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:10:36.556191 | orchestrator | Thursday 09 October 2025 10:10:32 +0000 (0:00:00.686) 0:00:02.250 ****** 2025-10-09 10:10:36.556201 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:10:36.556213 | orchestrator | 2025-10-09 10:10:36.556224 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 
10:10:36.556235 | orchestrator | 2025-10-09 10:10:36.556246 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:10:36.556257 | orchestrator | Thursday 09 October 2025 10:10:32 +0000 (0:00:00.115) 0:00:02.365 ****** 2025-10-09 10:10:36.556292 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:10:36.556304 | orchestrator | 2025-10-09 10:10:36.556315 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:10:36.556326 | orchestrator | Thursday 09 October 2025 10:10:32 +0000 (0:00:00.226) 0:00:02.592 ****** 2025-10-09 10:10:36.556336 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:10:36.556347 | orchestrator | 2025-10-09 10:10:36.556358 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:10:36.556369 | orchestrator | Thursday 09 October 2025 10:10:33 +0000 (0:00:00.654) 0:00:03.247 ****** 2025-10-09 10:10:36.556415 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:10:36.556427 | orchestrator | 2025-10-09 10:10:36.556438 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:10:36.556449 | orchestrator | 2025-10-09 10:10:36.556460 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:10:36.556471 | orchestrator | Thursday 09 October 2025 10:10:33 +0000 (0:00:00.131) 0:00:03.379 ****** 2025-10-09 10:10:36.556482 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:10:36.556492 | orchestrator | 2025-10-09 10:10:36.556503 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:10:36.556514 | orchestrator | Thursday 09 October 2025 10:10:33 +0000 (0:00:00.125) 0:00:03.504 ****** 2025-10-09 10:10:36.556525 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:10:36.556536 | orchestrator | 2025-10-09 10:10:36.556547 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:10:36.556558 | orchestrator | Thursday 09 October 2025 10:10:34 +0000 (0:00:00.678) 0:00:04.183 ****** 2025-10-09 10:10:36.556568 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:10:36.556579 | orchestrator | 2025-10-09 10:10:36.556590 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:10:36.556601 | orchestrator | 2025-10-09 10:10:36.556612 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:10:36.556622 | orchestrator | Thursday 09 October 2025 10:10:34 +0000 (0:00:00.122) 0:00:04.305 ****** 2025-10-09 10:10:36.556633 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:10:36.556644 | orchestrator | 2025-10-09 10:10:36.556655 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:10:36.556665 | orchestrator | Thursday 09 October 2025 10:10:34 +0000 (0:00:00.144) 0:00:04.449 ****** 2025-10-09 10:10:36.556676 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:10:36.556689 | orchestrator | 2025-10-09 10:10:36.556707 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:10:36.556726 | orchestrator | Thursday 09 October 2025 10:10:35 +0000 (0:00:00.673) 0:00:05.123 ****** 2025-10-09 10:10:36.556743 | orchestrator | skipping: [testbed-node-4] 2025-10-09 
10:10:36.556760 | orchestrator | 2025-10-09 10:10:36.556778 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:10:36.556796 | orchestrator | 2025-10-09 10:10:36.556815 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:10:36.556853 | orchestrator | Thursday 09 October 2025 10:10:35 +0000 (0:00:00.120) 0:00:05.244 ****** 2025-10-09 10:10:36.556870 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:10:36.556882 | orchestrator | 2025-10-09 10:10:36.556893 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:10:36.556904 | orchestrator | Thursday 09 October 2025 10:10:35 +0000 (0:00:00.112) 0:00:05.356 ****** 2025-10-09 10:10:36.556915 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:10:36.556926 | orchestrator | 2025-10-09 10:10:36.556937 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:10:36.556948 | orchestrator | Thursday 09 October 2025 10:10:36 +0000 (0:00:00.700) 0:00:06.056 ****** 2025-10-09 10:10:36.556989 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:10:36.557008 | orchestrator | 2025-10-09 10:10:36.557025 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:10:36.557057 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:36.557078 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:36.557097 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:36.557111 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:36.557122 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:36.557133 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:10:36.557144 | orchestrator | 2025-10-09 10:10:36.557155 | orchestrator | 2025-10-09 10:10:36.557166 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:10:36.557177 | orchestrator | Thursday 09 October 2025 10:10:36 +0000 (0:00:00.039) 0:00:06.096 ****** 2025-10-09 10:10:36.557187 | orchestrator | =============================================================================== 2025-10-09 10:10:36.557198 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.35s 2025-10-09 10:10:36.557209 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.83s 2025-10-09 10:10:36.557221 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2025-10-09 10:10:36.906284 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-10-09 10:10:49.021850 | orchestrator | 2025-10-09 10:10:49 | INFO  | Task a195d040-ac8c-4228-83b8-1a2562d5b1df (wait-for-connection) was prepared for execution. 2025-10-09 10:10:49.021958 | orchestrator | 2025-10-09 10:10:49 | INFO  | It takes a moment until task a195d040-ac8c-4228-83b8-1a2562d5b1df (wait-for-connection) has been started and output is visible here. 
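
The reboot play above deliberately triggers the reboot asynchronously ("do not wait for the reboot to complete") and leaves the blocking to the follow-up wait-for-connection play that is being queued here. A minimal shell sketch of that two-step pattern, reusing the exact osism invocations traced in this log (the set -euo pipefail guard is an addition for the sketch, not taken from the job script):

    set -euo pipefail
    # Reboot all testbed nodes without waiting for them to come back ...
    osism apply reboot -l testbed-nodes -e ireallymeanit=yes
    # ... then block until every node is reachable again over SSH.
    osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
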
2025-10-09 10:11:05.431993 | orchestrator | 2025-10-09 10:11:05.432130 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-10-09 10:11:05.432160 | orchestrator | 2025-10-09 10:11:05.432173 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-10-09 10:11:05.432185 | orchestrator | Thursday 09 October 2025 10:10:53 +0000 (0:00:00.250) 0:00:00.250 ****** 2025-10-09 10:11:05.432197 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:11:05.432209 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:11:05.432221 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:11:05.432232 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:11:05.432243 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:11:05.432253 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:11:05.432264 | orchestrator | 2025-10-09 10:11:05.432276 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:11:05.432288 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:11:05.432301 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:11:05.432312 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:11:05.432323 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:11:05.432334 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:11:05.432405 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:11:05.432418 | orchestrator | 2025-10-09 10:11:05.432430 | orchestrator | 2025-10-09 10:11:05.432440 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:11:05.432452 | orchestrator | Thursday 09 October 2025 10:11:05 +0000 (0:00:11.572) 0:00:11.822 ****** 2025-10-09 10:11:05.432462 | orchestrator | =============================================================================== 2025-10-09 10:11:05.432488 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s 2025-10-09 10:11:05.765202 | orchestrator | + osism apply hddtemp 2025-10-09 10:11:18.064083 | orchestrator | 2025-10-09 10:11:18 | INFO  | Task 40f058b4-1d9a-4962-9ffb-8d5123b90c67 (hddtemp) was prepared for execution. 2025-10-09 10:11:18.064183 | orchestrator | 2025-10-09 10:11:18 | INFO  | It takes a moment until task 40f058b4-1d9a-4962-9ffb-8d5123b90c67 (hddtemp) has been started and output is visible here. 
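
The single task in the play above ("Wait until remote system is reachable") is presumably backed by Ansible's built-in connection probe. For a quick manual check outside of osism, the same module can be invoked ad hoc; the inventory path and timeout below are assumptions, not values from this job:

    # Hedged sketch: poll the testbed nodes until SSH plus the Python interpreter respond.
    ansible -i /path/to/inventory testbed-nodes \
        -m ansible.builtin.wait_for_connection -a "delay=5 timeout=600"
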
2025-10-09 10:11:46.863884 | orchestrator | 2025-10-09 10:11:46.864020 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-10-09 10:11:46.864077 | orchestrator | 2025-10-09 10:11:46.864092 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-10-09 10:11:46.864104 | orchestrator | Thursday 09 October 2025 10:11:22 +0000 (0:00:00.306) 0:00:00.306 ****** 2025-10-09 10:11:46.864115 | orchestrator | ok: [testbed-manager] 2025-10-09 10:11:46.864128 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:11:46.864139 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:11:46.864150 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:11:46.864160 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:11:46.864171 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:11:46.864182 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:11:46.864193 | orchestrator | 2025-10-09 10:11:46.864204 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-10-09 10:11:46.864215 | orchestrator | Thursday 09 October 2025 10:11:23 +0000 (0:00:00.737) 0:00:01.043 ****** 2025-10-09 10:11:46.864227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:11:46.864240 | orchestrator | 2025-10-09 10:11:46.864252 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-10-09 10:11:46.864263 | orchestrator | Thursday 09 October 2025 10:11:24 +0000 (0:00:01.299) 0:00:02.343 ****** 2025-10-09 10:11:46.864274 | orchestrator | ok: [testbed-manager] 2025-10-09 10:11:46.864285 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:11:46.864295 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:11:46.864306 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:11:46.864317 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:11:46.864328 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:11:46.864339 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:11:46.864350 | orchestrator | 2025-10-09 10:11:46.864361 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-10-09 10:11:46.864402 | orchestrator | Thursday 09 October 2025 10:11:26 +0000 (0:00:02.018) 0:00:04.361 ****** 2025-10-09 10:11:46.864414 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:46.864426 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:11:46.864440 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:11:46.864452 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:11:46.864464 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:11:46.864476 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:11:46.864488 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:11:46.864500 | orchestrator | 2025-10-09 10:11:46.864513 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-10-09 10:11:46.864526 | orchestrator | Thursday 09 October 2025 10:11:27 +0000 (0:00:01.262) 0:00:05.624 ****** 2025-10-09 10:11:46.864563 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:11:46.864576 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:11:46.864589 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:11:46.864601 | orchestrator | ok: [testbed-node-3] 2025-10-09 
10:11:46.864613 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:11:46.864625 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:11:46.864637 | orchestrator | ok: [testbed-manager] 2025-10-09 10:11:46.864649 | orchestrator | 2025-10-09 10:11:46.864662 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-10-09 10:11:46.864674 | orchestrator | Thursday 09 October 2025 10:11:29 +0000 (0:00:01.254) 0:00:06.878 ****** 2025-10-09 10:11:46.864687 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:11:46.864699 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:11:46.864711 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:46.864723 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:11:46.864735 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:11:46.864748 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:11:46.864760 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:11:46.864772 | orchestrator | 2025-10-09 10:11:46.864785 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-10-09 10:11:46.864797 | orchestrator | Thursday 09 October 2025 10:11:29 +0000 (0:00:00.898) 0:00:07.777 ****** 2025-10-09 10:11:46.864808 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:46.864819 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:11:46.864830 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:11:46.864841 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:11:46.864851 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:11:46.864862 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:11:46.864873 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:11:46.864884 | orchestrator | 2025-10-09 10:11:46.864895 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-10-09 10:11:46.864906 | orchestrator | Thursday 09 October 2025 10:11:42 +0000 (0:00:12.969) 0:00:20.747 ****** 2025-10-09 10:11:46.864917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:11:46.864929 | orchestrator | 2025-10-09 10:11:46.864940 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-10-09 10:11:46.864951 | orchestrator | Thursday 09 October 2025 10:11:44 +0000 (0:00:01.511) 0:00:22.259 ****** 2025-10-09 10:11:46.864962 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:46.864972 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:11:46.864983 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:11:46.864994 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:11:46.865005 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:11:46.865016 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:11:46.865041 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:11:46.865053 | orchestrator | 2025-10-09 10:11:46.865064 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:11:46.865076 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:11:46.865108 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:11:46.865120 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:11:46.865132 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:11:46.865143 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:11:46.865163 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:11:46.865174 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:11:46.865185 | orchestrator | 2025-10-09 10:11:46.865197 | orchestrator | 2025-10-09 10:11:46.865208 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:11:46.865220 | orchestrator | Thursday 09 October 2025 10:11:46 +0000 (0:00:02.004) 0:00:24.263 ****** 2025-10-09 10:11:46.865232 | orchestrator | =============================================================================== 2025-10-09 10:11:46.865244 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.97s 2025-10-09 10:11:46.865256 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.02s 2025-10-09 10:11:46.865267 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.00s 2025-10-09 10:11:46.865279 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.51s 2025-10-09 10:11:46.865291 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.30s 2025-10-09 10:11:46.865303 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.26s 2025-10-09 10:11:46.865315 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.25s 2025-10-09 10:11:46.865326 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.90s 2025-10-09 10:11:46.865338 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.74s 2025-10-09 10:11:47.221504 | orchestrator | ++ semver 9.4.0 7.1.1 2025-10-09 10:11:47.268872 | orchestrator | + [[ 1 -ge 0 ]] 2025-10-09 10:11:47.268901 | orchestrator | + sudo systemctl restart manager.service 2025-10-09 10:12:01.014807 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-10-09 10:12:01.014924 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-10-09 10:12:01.014940 | orchestrator | + local max_attempts=60 2025-10-09 10:12:01.014955 | orchestrator | + local name=ceph-ansible 2025-10-09 10:12:01.014967 | orchestrator | + local attempt_num=1 2025-10-09 10:12:01.014978 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:01.053469 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:01.053537 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:01.053551 | orchestrator | + sleep 5 2025-10-09 10:12:06.058634 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:06.095454 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:06.095516 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:06.095528 | orchestrator | + sleep 5 2025-10-09 10:12:11.098602 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:11.133017 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:11.133083 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:11.133097 | orchestrator | + sleep 5 2025-10-09 10:12:16.136188 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:16.180396 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:16.180462 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:16.180477 | orchestrator | + sleep 5 2025-10-09 10:12:21.186271 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:21.229605 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:21.229674 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:21.229688 | orchestrator | + sleep 5 2025-10-09 10:12:26.235764 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:26.276430 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:26.276506 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:26.276520 | orchestrator | + sleep 5 2025-10-09 10:12:31.282338 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:31.322444 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:31.322544 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:31.322558 | orchestrator | + sleep 5 2025-10-09 10:12:36.326686 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:36.360868 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:36.360907 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:36.360919 | orchestrator | + sleep 5 2025-10-09 10:12:41.366325 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:41.421705 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:41.422078 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:41.422105 | orchestrator | + sleep 5 2025-10-09 10:12:46.428376 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:46.461558 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:46.461611 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:46.461625 | orchestrator | + sleep 5 2025-10-09 10:12:51.467114 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:51.513990 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:51.514086 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:51.514102 | orchestrator | + sleep 5 2025-10-09 10:12:56.518743 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:12:56.560209 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:12:56.560273 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:12:56.560287 | orchestrator | + sleep 5 2025-10-09 10:13:01.564707 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:13:01.602325 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:13:01.602414 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:13:01.602429 | orchestrator | + sleep 5 
2025-10-09 10:13:06.607487 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:13:06.648098 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:13:06.648144 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-10-09 10:13:06.648157 | orchestrator | + local max_attempts=60 2025-10-09 10:13:06.648170 | orchestrator | + local name=kolla-ansible 2025-10-09 10:13:06.648181 | orchestrator | + local attempt_num=1 2025-10-09 10:13:06.648839 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-10-09 10:13:06.684618 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:13:06.684661 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-10-09 10:13:06.684673 | orchestrator | + local max_attempts=60 2025-10-09 10:13:06.684685 | orchestrator | + local name=osism-ansible 2025-10-09 10:13:06.684696 | orchestrator | + local attempt_num=1 2025-10-09 10:13:06.685785 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-10-09 10:13:06.721271 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:13:06.721297 | orchestrator | + [[ true == \t\r\u\e ]] 2025-10-09 10:13:06.721309 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-10-09 10:13:06.897134 | orchestrator | ARA in ceph-ansible already disabled. 2025-10-09 10:13:07.052589 | orchestrator | ARA in kolla-ansible already disabled. 2025-10-09 10:13:07.220129 | orchestrator | ARA in osism-ansible already disabled. 2025-10-09 10:13:07.397458 | orchestrator | ARA in osism-kubernetes already disabled. 2025-10-09 10:13:07.398135 | orchestrator | + osism apply gather-facts 2025-10-09 10:13:19.594602 | orchestrator | 2025-10-09 10:13:19 | INFO  | Task 368eaf8c-599d-49eb-af1f-280a32f65697 (gather-facts) was prepared for execution. 2025-10-09 10:13:19.594713 | orchestrator | 2025-10-09 10:13:19 | INFO  | It takes a moment until task 368eaf8c-599d-49eb-af1f-280a32f65697 (gather-facts) has been started and output is visible here. 
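
The polling visible above (repeated docker inspect, sleep 5) comes from the wait_for_container_healthy helper. A reconstruction of that loop from the trace; the function name, argument order, format string, and five-second sleep match the log, while the behaviour on timeout is an assumption since the job never hits it:

    # Wait until a container's health status is "healthy", polling every 5 seconds.
    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num++ == max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            sleep 5
        done
    }

    wait_for_container_healthy 60 ceph-ansible
    wait_for_container_healthy 60 kolla-ansible
    wait_for_container_healthy 60 osism-ansible
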
2025-10-09 10:13:33.949460 | orchestrator | 2025-10-09 10:13:33.949584 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:13:33.949601 | orchestrator | 2025-10-09 10:13:33.949613 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 10:13:33.949625 | orchestrator | Thursday 09 October 2025 10:13:23 +0000 (0:00:00.235) 0:00:00.235 ****** 2025-10-09 10:13:33.949636 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:13:33.949648 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:13:33.949660 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:13:33.949698 | orchestrator | ok: [testbed-manager] 2025-10-09 10:13:33.949709 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:13:33.949720 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:13:33.949731 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:13:33.949741 | orchestrator | 2025-10-09 10:13:33.949752 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-10-09 10:13:33.949763 | orchestrator | 2025-10-09 10:13:33.949774 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-10-09 10:13:33.949785 | orchestrator | Thursday 09 October 2025 10:13:32 +0000 (0:00:08.975) 0:00:09.210 ****** 2025-10-09 10:13:33.949797 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:13:33.949809 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:13:33.949820 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:13:33.949830 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:13:33.949841 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:13:33.949852 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:13:33.949862 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:13:33.949873 | orchestrator | 2025-10-09 10:13:33.949884 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:13:33.949895 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:33.949908 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:33.949918 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:33.949929 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:33.949940 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:33.949951 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:33.949964 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:33.949976 | orchestrator | 2025-10-09 10:13:33.949988 | orchestrator | 2025-10-09 10:13:33.950000 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:13:33.950013 | orchestrator | Thursday 09 October 2025 10:13:33 +0000 (0:00:00.594) 0:00:09.805 ****** 2025-10-09 10:13:33.950080 | orchestrator | =============================================================================== 2025-10-09 10:13:33.950093 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.98s 2025-10-09 
10:13:33.950105 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-10-09 10:13:34.314610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-10-09 10:13:34.331101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-10-09 10:13:34.348888 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-10-09 10:13:34.364022 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-10-09 10:13:34.380101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-10-09 10:13:34.393539 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-10-09 10:13:34.413953 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-10-09 10:13:34.433323 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-10-09 10:13:34.451268 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-10-09 10:13:34.473157 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-10-09 10:13:34.492454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-10-09 10:13:34.507094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-10-09 10:13:34.530687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-10-09 10:13:34.551774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-10-09 10:13:34.573405 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-10-09 10:13:34.593412 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-10-09 10:13:34.611572 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-10-09 10:13:34.630398 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-10-09 10:13:34.653772 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-10-09 10:13:34.678471 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-10-09 10:13:34.699135 | orchestrator | + [[ false == \t\r\u\e ]] 2025-10-09 10:13:34.997124 | orchestrator | ok: Runtime: 0:24:14.142289 2025-10-09 10:13:35.088977 | 2025-10-09 10:13:35.089099 | TASK [Deploy services] 2025-10-09 10:13:35.619700 | orchestrator | skipping: Conditional result was False 2025-10-09 10:13:35.636324 | 2025-10-09 10:13:35.636490 | TASK [Deploy in a nutshell] 2025-10-09 10:13:36.315109 | orchestrator | 2025-10-09 
10:13:36.315276 | orchestrator | # PULL IMAGES 2025-10-09 10:13:36.315300 | orchestrator | 2025-10-09 10:13:36.315315 | orchestrator | + set -e 2025-10-09 10:13:36.315332 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 10:13:36.315420 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 10:13:36.315435 | orchestrator | ++ INTERACTIVE=false 2025-10-09 10:13:36.315479 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 10:13:36.315500 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 10:13:36.315514 | orchestrator | + source /opt/manager-vars.sh 2025-10-09 10:13:36.315525 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-10-09 10:13:36.315543 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-09 10:13:36.315554 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-09 10:13:36.315572 | orchestrator | ++ CEPH_VERSION=reef 2025-10-09 10:13:36.315583 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-09 10:13:36.315601 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-10-09 10:13:36.315612 | orchestrator | ++ export MANAGER_VERSION=9.4.0 2025-10-09 10:13:36.315626 | orchestrator | ++ MANAGER_VERSION=9.4.0 2025-10-09 10:13:36.315637 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-09 10:13:36.315649 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-09 10:13:36.315660 | orchestrator | ++ export ARA=false 2025-10-09 10:13:36.315671 | orchestrator | ++ ARA=false 2025-10-09 10:13:36.315681 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-09 10:13:36.315692 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-09 10:13:36.315703 | orchestrator | ++ export TEMPEST=false 2025-10-09 10:13:36.315713 | orchestrator | ++ TEMPEST=false 2025-10-09 10:13:36.315724 | orchestrator | ++ export IS_ZUUL=true 2025-10-09 10:13:36.315735 | orchestrator | ++ IS_ZUUL=true 2025-10-09 10:13:36.315745 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 10:13:36.315756 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 10:13:36.315767 | orchestrator | ++ export EXTERNAL_API=false 2025-10-09 10:13:36.315778 | orchestrator | ++ EXTERNAL_API=false 2025-10-09 10:13:36.315788 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-09 10:13:36.315800 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-09 10:13:36.315810 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-09 10:13:36.315821 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-09 10:13:36.315832 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-09 10:13:36.315849 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-09 10:13:36.315860 | orchestrator | + echo 2025-10-09 10:13:36.315872 | orchestrator | + echo '# PULL IMAGES' 2025-10-09 10:13:36.315883 | orchestrator | + echo 2025-10-09 10:13:36.315907 | orchestrator | ++ semver 9.4.0 7.0.0 2025-10-09 10:13:36.381798 | orchestrator | + [[ 1 -ge 0 ]] 2025-10-09 10:13:36.381855 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-10-09 10:13:38.396731 | orchestrator | 2025-10-09 10:13:38 | INFO  | Trying to run play pull-images in environment custom 2025-10-09 10:13:48.567922 | orchestrator | 2025-10-09 10:13:48 | INFO  | Task ae4dbd8e-6ff9-4303-b8e8-0a2568d0db0b (pull-images) was prepared for execution. 2025-10-09 10:13:48.568042 | orchestrator | 2025-10-09 10:13:48 | INFO  | Task ae4dbd8e-6ff9-4303-b8e8-0a2568d0db0b is running in background. No more output. Check ARA for logs. 
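
The trace above guards the image pull behind a semver comparison against the manager version and then queues the play without waiting for its output. Condensed into a sketch, assuming the guard uses the MANAGER_VERSION variable exported from /opt/manager-vars.sh (the traced script performs the same numeric test):

    # Only run the pull when the manager version compares >= 7.0.0.
    if [[ "$(semver "${MANAGER_VERSION}" 7.0.0)" -ge 0 ]]; then
        # Flags copied verbatim from the trace; the log confirms the play runs in the
        # "custom" environment in the background, with results visible only in ARA.
        osism apply --no-wait -r 2 -e custom pull-images
    fi
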
2025-10-09 10:13:50.994630 | orchestrator | 2025-10-09 10:13:50 | INFO  | Trying to run play wipe-partitions in environment custom 2025-10-09 10:14:01.129181 | orchestrator | 2025-10-09 10:14:01 | INFO  | Task 47960cf0-2e83-4fac-89f8-3e4dfcc35d5d (wipe-partitions) was prepared for execution. 2025-10-09 10:14:01.129284 | orchestrator | 2025-10-09 10:14:01 | INFO  | It takes a moment until task 47960cf0-2e83-4fac-89f8-3e4dfcc35d5d (wipe-partitions) has been started and output is visible here. 2025-10-09 10:14:13.998164 | orchestrator | 2025-10-09 10:14:13.998294 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-10-09 10:14:13.998311 | orchestrator | 2025-10-09 10:14:13.998323 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-10-09 10:14:13.998394 | orchestrator | Thursday 09 October 2025 10:14:05 +0000 (0:00:00.145) 0:00:00.145 ****** 2025-10-09 10:14:13.998410 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:14:13.998422 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:14:13.998433 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:14:13.998444 | orchestrator | 2025-10-09 10:14:13.998456 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-10-09 10:14:13.998495 | orchestrator | Thursday 09 October 2025 10:14:06 +0000 (0:00:00.587) 0:00:00.733 ****** 2025-10-09 10:14:13.998507 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:13.998518 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:14:13.998528 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:14:13.998543 | orchestrator | 2025-10-09 10:14:13.998554 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-10-09 10:14:13.998565 | orchestrator | Thursday 09 October 2025 10:14:06 +0000 (0:00:00.381) 0:00:01.115 ****** 2025-10-09 10:14:13.998576 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:14:13.998587 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:14:13.998598 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:14:13.998608 | orchestrator | 2025-10-09 10:14:13.998619 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-10-09 10:14:13.998630 | orchestrator | Thursday 09 October 2025 10:14:07 +0000 (0:00:00.595) 0:00:01.710 ****** 2025-10-09 10:14:13.998641 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:13.998652 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:14:13.998662 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:14:13.998673 | orchestrator | 2025-10-09 10:14:13.998684 | orchestrator | TASK [Check device availability] *********************************************** 2025-10-09 10:14:13.998708 | orchestrator | Thursday 09 October 2025 10:14:07 +0000 (0:00:00.286) 0:00:01.997 ****** 2025-10-09 10:14:13.998719 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-10-09 10:14:13.998744 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-10-09 10:14:13.998755 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-10-09 10:14:13.998766 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-10-09 10:14:13.998777 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-10-09 10:14:13.998788 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-10-09 10:14:13.998798 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2025-10-09 10:14:13.998809 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-10-09 10:14:13.998819 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-10-09 10:14:13.998830 | orchestrator | 2025-10-09 10:14:13.998841 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-10-09 10:14:13.998852 | orchestrator | Thursday 09 October 2025 10:14:08 +0000 (0:00:01.227) 0:00:03.224 ****** 2025-10-09 10:14:13.998863 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-10-09 10:14:13.998874 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-10-09 10:14:13.998885 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-10-09 10:14:13.998896 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-10-09 10:14:13.998907 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-10-09 10:14:13.998917 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-10-09 10:14:13.998928 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-10-09 10:14:13.998938 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-10-09 10:14:13.998949 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-10-09 10:14:13.998960 | orchestrator | 2025-10-09 10:14:13.998971 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-10-09 10:14:13.998981 | orchestrator | Thursday 09 October 2025 10:14:10 +0000 (0:00:01.565) 0:00:04.790 ****** 2025-10-09 10:14:13.998992 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-10-09 10:14:13.999003 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-10-09 10:14:13.999013 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-10-09 10:14:13.999023 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-10-09 10:14:13.999034 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-10-09 10:14:13.999045 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-10-09 10:14:13.999055 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-10-09 10:14:13.999066 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-10-09 10:14:13.999091 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-10-09 10:14:13.999103 | orchestrator | 2025-10-09 10:14:13.999114 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-10-09 10:14:13.999125 | orchestrator | Thursday 09 October 2025 10:14:12 +0000 (0:00:02.080) 0:00:06.870 ****** 2025-10-09 10:14:13.999135 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:14:13.999146 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:14:13.999157 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:14:13.999167 | orchestrator | 2025-10-09 10:14:13.999178 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-10-09 10:14:13.999189 | orchestrator | Thursday 09 October 2025 10:14:12 +0000 (0:00:00.625) 0:00:07.496 ****** 2025-10-09 10:14:13.999200 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:14:13.999210 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:14:13.999221 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:14:13.999232 | orchestrator | 2025-10-09 10:14:13.999242 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:14:13.999254 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:13.999266 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:13.999296 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:13.999307 | orchestrator | 2025-10-09 10:14:13.999317 | orchestrator | 2025-10-09 10:14:13.999328 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:14:13.999373 | orchestrator | Thursday 09 October 2025 10:14:13 +0000 (0:00:00.632) 0:00:08.129 ****** 2025-10-09 10:14:13.999387 | orchestrator | =============================================================================== 2025-10-09 10:14:13.999397 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.08s 2025-10-09 10:14:13.999408 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s 2025-10-09 10:14:13.999419 | orchestrator | Check device availability ----------------------------------------------- 1.23s 2025-10-09 10:14:13.999430 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-10-09 10:14:13.999447 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2025-10-09 10:14:13.999466 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s 2025-10-09 10:14:13.999486 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-10-09 10:14:13.999503 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s 2025-10-09 10:14:13.999521 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s 2025-10-09 10:14:26.406912 | orchestrator | 2025-10-09 10:14:26 | INFO  | Task 7c49fc9a-58e3-403b-b59b-8b0dab7882d1 (facts) was prepared for execution. 2025-10-09 10:14:26.407006 | orchestrator | 2025-10-09 10:14:26 | INFO  | It takes a moment until task 7c49fc9a-58e3-403b-b59b-8b0dab7882d1 (facts) has been started and output is visible here. 
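
The wipe-partitions play above clears the OSD data disks in three steps: wipe filesystem and partition signatures, zero the first 32M, then make udev re-read the devices. A rough shell equivalent of those tasks for the disks named in the log; the exact flags used by the play are not visible, so these are assumptions:

    # Approximation of the wipe sequence for the Ceph data disks on each node.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        wipefs --all "$dev"                                  # drop FS/partition signatures
        dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync  # overwrite first 32M with zeros
    done
    udevadm control --reload-rules   # reload udev rules
    udevadm trigger                  # request device events from the kernel
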
2025-10-09 10:14:39.208804 | orchestrator | 2025-10-09 10:14:39.208910 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-10-09 10:14:39.208927 | orchestrator | 2025-10-09 10:14:39.208939 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-10-09 10:14:39.208950 | orchestrator | Thursday 09 October 2025 10:14:30 +0000 (0:00:00.306) 0:00:00.306 ****** 2025-10-09 10:14:39.208960 | orchestrator | ok: [testbed-manager] 2025-10-09 10:14:39.208971 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:14:39.208981 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:14:39.208991 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:14:39.209024 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:14:39.209034 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:14:39.209044 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:14:39.209053 | orchestrator | 2025-10-09 10:14:39.209063 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-10-09 10:14:39.209073 | orchestrator | Thursday 09 October 2025 10:14:31 +0000 (0:00:01.139) 0:00:01.445 ****** 2025-10-09 10:14:39.209082 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:14:39.209093 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:14:39.209102 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:14:39.209112 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:14:39.209121 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:39.209131 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:14:39.209140 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:14:39.209150 | orchestrator | 2025-10-09 10:14:39.209159 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:14:39.209169 | orchestrator | 2025-10-09 10:14:39.209194 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 10:14:39.209204 | orchestrator | Thursday 09 October 2025 10:14:33 +0000 (0:00:01.378) 0:00:02.824 ****** 2025-10-09 10:14:39.209214 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:14:39.209223 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:14:39.209233 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:14:39.209243 | orchestrator | ok: [testbed-manager] 2025-10-09 10:14:39.209253 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:14:39.209262 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:14:39.209272 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:14:39.209281 | orchestrator | 2025-10-09 10:14:39.209291 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-10-09 10:14:39.209300 | orchestrator | 2025-10-09 10:14:39.209310 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-10-09 10:14:39.209319 | orchestrator | Thursday 09 October 2025 10:14:38 +0000 (0:00:04.907) 0:00:07.731 ****** 2025-10-09 10:14:39.209329 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:14:39.209371 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:14:39.209383 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:14:39.209395 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:14:39.209406 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:39.209416 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:14:39.209426 | orchestrator | skipping: 
[testbed-node-5] 2025-10-09 10:14:39.209437 | orchestrator | 2025-10-09 10:14:39.209448 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:14:39.209459 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:39.209471 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:39.209482 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:39.209493 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:39.209503 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:39.209514 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:39.209525 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:14:39.209536 | orchestrator | 2025-10-09 10:14:39.209546 | orchestrator | 2025-10-09 10:14:39.209557 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:14:39.209577 | orchestrator | Thursday 09 October 2025 10:14:38 +0000 (0:00:00.573) 0:00:08.305 ****** 2025-10-09 10:14:39.209588 | orchestrator | =============================================================================== 2025-10-09 10:14:39.209599 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.91s 2025-10-09 10:14:39.209609 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s 2025-10-09 10:14:39.209621 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2025-10-09 10:14:39.209631 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2025-10-09 10:14:41.658614 | orchestrator | 2025-10-09 10:14:41 | INFO  | Task a2362a01-ef8e-44d9-ba30-697823735771 (ceph-configure-lvm-volumes) was prepared for execution. 2025-10-09 10:14:41.658717 | orchestrator | 2025-10-09 10:14:41 | INFO  | It takes a moment until task a2362a01-ef8e-44d9-ba30-697823735771 (ceph-configure-lvm-volumes) has been started and output is visible here. 
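
The facts play above only ensures the custom facts directory exists; the "Copy fact files" task is skipped in this run. For reference, Ansible's local-facts mechanism that such a directory feeds looks roughly like the sketch below. Whether the role uses exactly /etc/ansible/facts.d, and the example file name and content, are assumptions:

    # Local facts: JSON/INI or executable *.fact files in facts.d become ansible_local.* facts.
    sudo mkdir -p /etc/ansible/facts.d
    printf '{"deploy_mode": "manager"}\n' | sudo tee /etc/ansible/facts.d/testbed.fact
    # Visible after the next fact-gathering pass:
    ansible localhost -m ansible.builtin.setup -a 'filter=ansible_local'
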
2025-10-09 10:14:54.033984 | orchestrator | 2025-10-09 10:14:54.034183 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-10-09 10:14:54.034211 | orchestrator | 2025-10-09 10:14:54.034224 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:14:54.034236 | orchestrator | Thursday 09 October 2025 10:14:46 +0000 (0:00:00.363) 0:00:00.363 ****** 2025-10-09 10:14:54.034248 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:14:54.034260 | orchestrator | 2025-10-09 10:14:54.034271 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:14:54.034282 | orchestrator | Thursday 09 October 2025 10:14:46 +0000 (0:00:00.283) 0:00:00.647 ****** 2025-10-09 10:14:54.034293 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:14:54.034306 | orchestrator | 2025-10-09 10:14:54.034317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034327 | orchestrator | Thursday 09 October 2025 10:14:46 +0000 (0:00:00.286) 0:00:00.934 ****** 2025-10-09 10:14:54.034389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-10-09 10:14:54.034402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-10-09 10:14:54.034424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-10-09 10:14:54.034436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-10-09 10:14:54.034447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-10-09 10:14:54.034458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-10-09 10:14:54.034469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-10-09 10:14:54.034480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-10-09 10:14:54.034491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-10-09 10:14:54.034502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-10-09 10:14:54.034515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-10-09 10:14:54.034528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-10-09 10:14:54.034540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-10-09 10:14:54.034552 | orchestrator | 2025-10-09 10:14:54.034565 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034577 | orchestrator | Thursday 09 October 2025 10:14:47 +0000 (0:00:00.395) 0:00:01.329 ****** 2025-10-09 10:14:54.034589 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.034602 | orchestrator | 2025-10-09 10:14:54.034634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034647 | orchestrator | Thursday 09 October 2025 10:14:47 +0000 (0:00:00.490) 0:00:01.820 ****** 2025-10-09 10:14:54.034659 | orchestrator | skipping: [testbed-node-3] 2025-10-09 
10:14:54.034671 | orchestrator | 2025-10-09 10:14:54.034684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034696 | orchestrator | Thursday 09 October 2025 10:14:47 +0000 (0:00:00.213) 0:00:02.035 ****** 2025-10-09 10:14:54.034708 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.034721 | orchestrator | 2025-10-09 10:14:54.034733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034746 | orchestrator | Thursday 09 October 2025 10:14:48 +0000 (0:00:00.210) 0:00:02.245 ****** 2025-10-09 10:14:54.034758 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.034770 | orchestrator | 2025-10-09 10:14:54.034787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034800 | orchestrator | Thursday 09 October 2025 10:14:48 +0000 (0:00:00.215) 0:00:02.460 ****** 2025-10-09 10:14:54.034812 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.034825 | orchestrator | 2025-10-09 10:14:54.034838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034851 | orchestrator | Thursday 09 October 2025 10:14:48 +0000 (0:00:00.222) 0:00:02.683 ****** 2025-10-09 10:14:54.034863 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.034874 | orchestrator | 2025-10-09 10:14:54.034885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034895 | orchestrator | Thursday 09 October 2025 10:14:48 +0000 (0:00:00.190) 0:00:02.874 ****** 2025-10-09 10:14:54.034906 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.034917 | orchestrator | 2025-10-09 10:14:54.034927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034938 | orchestrator | Thursday 09 October 2025 10:14:48 +0000 (0:00:00.224) 0:00:03.098 ****** 2025-10-09 10:14:54.034949 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.034959 | orchestrator | 2025-10-09 10:14:54.034970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.034981 | orchestrator | Thursday 09 October 2025 10:14:49 +0000 (0:00:00.208) 0:00:03.307 ****** 2025-10-09 10:14:54.034992 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f) 2025-10-09 10:14:54.035004 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f) 2025-10-09 10:14:54.035015 | orchestrator | 2025-10-09 10:14:54.035025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.035036 | orchestrator | Thursday 09 October 2025 10:14:49 +0000 (0:00:00.402) 0:00:03.710 ****** 2025-10-09 10:14:54.035064 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16) 2025-10-09 10:14:54.035076 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16) 2025-10-09 10:14:54.035087 | orchestrator | 2025-10-09 10:14:54.035097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.035114 | orchestrator | Thursday 09 October 2025 10:14:49 +0000 (0:00:00.457) 0:00:04.167 ****** 2025-10-09 
10:14:54.035125 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057) 2025-10-09 10:14:54.035136 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057) 2025-10-09 10:14:54.035147 | orchestrator | 2025-10-09 10:14:54.035157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.035168 | orchestrator | Thursday 09 October 2025 10:14:50 +0000 (0:00:00.655) 0:00:04.823 ****** 2025-10-09 10:14:54.035179 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2) 2025-10-09 10:14:54.035197 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2) 2025-10-09 10:14:54.035207 | orchestrator | 2025-10-09 10:14:54.035218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:14:54.035229 | orchestrator | Thursday 09 October 2025 10:14:51 +0000 (0:00:00.876) 0:00:05.699 ****** 2025-10-09 10:14:54.035240 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-09 10:14:54.035250 | orchestrator | 2025-10-09 10:14:54.035261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035271 | orchestrator | Thursday 09 October 2025 10:14:51 +0000 (0:00:00.325) 0:00:06.025 ****** 2025-10-09 10:14:54.035282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-10-09 10:14:54.035292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-10-09 10:14:54.035303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-10-09 10:14:54.035314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-10-09 10:14:54.035324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-10-09 10:14:54.035353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-10-09 10:14:54.035364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-10-09 10:14:54.035375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-10-09 10:14:54.035386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-10-09 10:14:54.035396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-10-09 10:14:54.035407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-10-09 10:14:54.035418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-10-09 10:14:54.035428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-10-09 10:14:54.035439 | orchestrator | 2025-10-09 10:14:54.035450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035461 | orchestrator | Thursday 09 October 2025 10:14:52 +0000 (0:00:00.383) 0:00:06.408 ****** 2025-10-09 10:14:54.035471 | orchestrator | skipping: [testbed-node-3] 
2025-10-09 10:14:54.035482 | orchestrator | 2025-10-09 10:14:54.035493 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035504 | orchestrator | Thursday 09 October 2025 10:14:52 +0000 (0:00:00.268) 0:00:06.677 ****** 2025-10-09 10:14:54.035514 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.035525 | orchestrator | 2025-10-09 10:14:54.035536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035547 | orchestrator | Thursday 09 October 2025 10:14:52 +0000 (0:00:00.230) 0:00:06.907 ****** 2025-10-09 10:14:54.035557 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.035568 | orchestrator | 2025-10-09 10:14:54.035579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035590 | orchestrator | Thursday 09 October 2025 10:14:52 +0000 (0:00:00.240) 0:00:07.147 ****** 2025-10-09 10:14:54.035600 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.035611 | orchestrator | 2025-10-09 10:14:54.035622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035633 | orchestrator | Thursday 09 October 2025 10:14:53 +0000 (0:00:00.216) 0:00:07.364 ****** 2025-10-09 10:14:54.035643 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.035654 | orchestrator | 2025-10-09 10:14:54.035665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035684 | orchestrator | Thursday 09 October 2025 10:14:53 +0000 (0:00:00.219) 0:00:07.584 ****** 2025-10-09 10:14:54.035695 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.035706 | orchestrator | 2025-10-09 10:14:54.035716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035727 | orchestrator | Thursday 09 October 2025 10:14:53 +0000 (0:00:00.202) 0:00:07.786 ****** 2025-10-09 10:14:54.035738 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:14:54.035749 | orchestrator | 2025-10-09 10:14:54.035759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:14:54.035770 | orchestrator | Thursday 09 October 2025 10:14:53 +0000 (0:00:00.225) 0:00:08.011 ****** 2025-10-09 10:14:54.035787 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.942804 | orchestrator | 2025-10-09 10:15:01.942912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:01.942929 | orchestrator | Thursday 09 October 2025 10:14:54 +0000 (0:00:00.198) 0:00:08.210 ****** 2025-10-09 10:15:01.942942 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-10-09 10:15:01.942955 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-10-09 10:15:01.942966 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-10-09 10:15:01.942977 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-10-09 10:15:01.942988 | orchestrator | 2025-10-09 10:15:01.942999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:01.943031 | orchestrator | Thursday 09 October 2025 10:14:55 +0000 (0:00:01.126) 0:00:09.337 ****** 2025-10-09 10:15:01.943042 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943053 | orchestrator | 2025-10-09 10:15:01.943064 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:01.943075 | orchestrator | Thursday 09 October 2025 10:14:55 +0000 (0:00:00.204) 0:00:09.541 ****** 2025-10-09 10:15:01.943086 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943097 | orchestrator | 2025-10-09 10:15:01.943108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:01.943119 | orchestrator | Thursday 09 October 2025 10:14:55 +0000 (0:00:00.229) 0:00:09.770 ****** 2025-10-09 10:15:01.943130 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943141 | orchestrator | 2025-10-09 10:15:01.943151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:01.943162 | orchestrator | Thursday 09 October 2025 10:14:55 +0000 (0:00:00.218) 0:00:09.988 ****** 2025-10-09 10:15:01.943173 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943184 | orchestrator | 2025-10-09 10:15:01.943195 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-10-09 10:15:01.943206 | orchestrator | Thursday 09 October 2025 10:14:56 +0000 (0:00:00.204) 0:00:10.193 ****** 2025-10-09 10:15:01.943217 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-10-09 10:15:01.943227 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-10-09 10:15:01.943238 | orchestrator | 2025-10-09 10:15:01.943249 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-10-09 10:15:01.943260 | orchestrator | Thursday 09 October 2025 10:14:56 +0000 (0:00:00.182) 0:00:10.375 ****** 2025-10-09 10:15:01.943270 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943281 | orchestrator | 2025-10-09 10:15:01.943292 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-10-09 10:15:01.943303 | orchestrator | Thursday 09 October 2025 10:14:56 +0000 (0:00:00.148) 0:00:10.524 ****** 2025-10-09 10:15:01.943313 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943324 | orchestrator | 2025-10-09 10:15:01.943358 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-10-09 10:15:01.943369 | orchestrator | Thursday 09 October 2025 10:14:56 +0000 (0:00:00.145) 0:00:10.669 ****** 2025-10-09 10:15:01.943380 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943391 | orchestrator | 2025-10-09 10:15:01.943420 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-10-09 10:15:01.943431 | orchestrator | Thursday 09 October 2025 10:14:56 +0000 (0:00:00.145) 0:00:10.815 ****** 2025-10-09 10:15:01.943442 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:15:01.943453 | orchestrator | 2025-10-09 10:15:01.943464 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-10-09 10:15:01.943475 | orchestrator | Thursday 09 October 2025 10:14:56 +0000 (0:00:00.138) 0:00:10.954 ****** 2025-10-09 10:15:01.943487 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54591ae8-508f-52be-a28f-431401b6d1ad'}}) 2025-10-09 10:15:01.943498 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4e7e843-affd-596c-a641-84f7bc574f86'}}) 2025-10-09 10:15:01.943509 | orchestrator | 
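The "Generate lvm_volumes structure (block only)" result above maps each ceph_osd_devices entry to a data/data_vg pair derived from its osd_lvm_uuid, as echoed later by "Print configuration data". A minimal sketch of that mapping, assuming a set_fact accumulation over dict2items (the item format {'key': 'sdb', 'value': {'osd_lvm_uuid': ...}} matches what the log prints); the variable name _lvm_volumes_block_only is illustrative, not taken from the playbook:

- name: Generate lvm_volumes structure (block only, sketch)
  ansible.builtin.set_fact:
    _lvm_volumes_block_only: >-
      {{ _lvm_volumes_block_only | default([]) +
         [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
           'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}
  loop: "{{ ceph_osd_devices | dict2items }}"

For testbed-node-3 this yields exactly the two entries shown further down: data "osd-block-54591ae8-508f-52be-a28f-431401b6d1ad" in VG "ceph-54591ae8-508f-52be-a28f-431401b6d1ad", and the same pattern for the sdc UUID.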
2025-10-09 10:15:01.943520 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-10-09 10:15:01.943531 | orchestrator | Thursday 09 October 2025 10:14:56 +0000 (0:00:00.189) 0:00:11.143 ****** 2025-10-09 10:15:01.943542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54591ae8-508f-52be-a28f-431401b6d1ad'}})  2025-10-09 10:15:01.943562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4e7e843-affd-596c-a641-84f7bc574f86'}})  2025-10-09 10:15:01.943573 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943584 | orchestrator | 2025-10-09 10:15:01.943595 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-10-09 10:15:01.943606 | orchestrator | Thursday 09 October 2025 10:14:57 +0000 (0:00:00.175) 0:00:11.319 ****** 2025-10-09 10:15:01.943617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54591ae8-508f-52be-a28f-431401b6d1ad'}})  2025-10-09 10:15:01.943628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4e7e843-affd-596c-a641-84f7bc574f86'}})  2025-10-09 10:15:01.943639 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943650 | orchestrator | 2025-10-09 10:15:01.943661 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-10-09 10:15:01.943671 | orchestrator | Thursday 09 October 2025 10:14:57 +0000 (0:00:00.369) 0:00:11.688 ****** 2025-10-09 10:15:01.943682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54591ae8-508f-52be-a28f-431401b6d1ad'}})  2025-10-09 10:15:01.943693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4e7e843-affd-596c-a641-84f7bc574f86'}})  2025-10-09 10:15:01.943705 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943716 | orchestrator | 2025-10-09 10:15:01.943745 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-10-09 10:15:01.943757 | orchestrator | Thursday 09 October 2025 10:14:57 +0000 (0:00:00.162) 0:00:11.851 ****** 2025-10-09 10:15:01.943767 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:15:01.943778 | orchestrator | 2025-10-09 10:15:01.943789 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-10-09 10:15:01.943800 | orchestrator | Thursday 09 October 2025 10:14:57 +0000 (0:00:00.144) 0:00:11.996 ****** 2025-10-09 10:15:01.943811 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:15:01.943822 | orchestrator | 2025-10-09 10:15:01.943833 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-10-09 10:15:01.943844 | orchestrator | Thursday 09 October 2025 10:14:57 +0000 (0:00:00.148) 0:00:12.144 ****** 2025-10-09 10:15:01.943855 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943866 | orchestrator | 2025-10-09 10:15:01.943877 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-10-09 10:15:01.943888 | orchestrator | Thursday 09 October 2025 10:14:58 +0000 (0:00:00.158) 0:00:12.303 ****** 2025-10-09 10:15:01.943898 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943909 | orchestrator | 2025-10-09 10:15:01.943920 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-10-09 10:15:01.943940 | orchestrator | Thursday 09 October 2025 10:14:58 +0000 (0:00:00.141) 0:00:12.444 ****** 2025-10-09 10:15:01.943951 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.943962 | orchestrator | 2025-10-09 10:15:01.943973 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-10-09 10:15:01.943984 | orchestrator | Thursday 09 October 2025 10:14:58 +0000 (0:00:00.169) 0:00:12.614 ****** 2025-10-09 10:15:01.943995 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:15:01.944005 | orchestrator |  "ceph_osd_devices": { 2025-10-09 10:15:01.944016 | orchestrator |  "sdb": { 2025-10-09 10:15:01.944027 | orchestrator |  "osd_lvm_uuid": "54591ae8-508f-52be-a28f-431401b6d1ad" 2025-10-09 10:15:01.944038 | orchestrator |  }, 2025-10-09 10:15:01.944049 | orchestrator |  "sdc": { 2025-10-09 10:15:01.944060 | orchestrator |  "osd_lvm_uuid": "b4e7e843-affd-596c-a641-84f7bc574f86" 2025-10-09 10:15:01.944071 | orchestrator |  } 2025-10-09 10:15:01.944082 | orchestrator |  } 2025-10-09 10:15:01.944093 | orchestrator | } 2025-10-09 10:15:01.944104 | orchestrator | 2025-10-09 10:15:01.944115 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-10-09 10:15:01.944126 | orchestrator | Thursday 09 October 2025 10:14:58 +0000 (0:00:00.142) 0:00:12.757 ****** 2025-10-09 10:15:01.944137 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.944148 | orchestrator | 2025-10-09 10:15:01.944159 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-10-09 10:15:01.944170 | orchestrator | Thursday 09 October 2025 10:14:58 +0000 (0:00:00.153) 0:00:12.910 ****** 2025-10-09 10:15:01.944186 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.944197 | orchestrator | 2025-10-09 10:15:01.944208 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-10-09 10:15:01.944219 | orchestrator | Thursday 09 October 2025 10:14:58 +0000 (0:00:00.145) 0:00:13.056 ****** 2025-10-09 10:15:01.944230 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:01.944241 | orchestrator | 2025-10-09 10:15:01.944252 | orchestrator | TASK [Print configuration data] ************************************************ 2025-10-09 10:15:01.944263 | orchestrator | Thursday 09 October 2025 10:14:59 +0000 (0:00:00.130) 0:00:13.186 ****** 2025-10-09 10:15:01.944273 | orchestrator | changed: [testbed-node-3] => { 2025-10-09 10:15:01.944284 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-10-09 10:15:01.944295 | orchestrator |  "ceph_osd_devices": { 2025-10-09 10:15:01.944306 | orchestrator |  "sdb": { 2025-10-09 10:15:01.944317 | orchestrator |  "osd_lvm_uuid": "54591ae8-508f-52be-a28f-431401b6d1ad" 2025-10-09 10:15:01.944328 | orchestrator |  }, 2025-10-09 10:15:01.944369 | orchestrator |  "sdc": { 2025-10-09 10:15:01.944380 | orchestrator |  "osd_lvm_uuid": "b4e7e843-affd-596c-a641-84f7bc574f86" 2025-10-09 10:15:01.944391 | orchestrator |  } 2025-10-09 10:15:01.944402 | orchestrator |  }, 2025-10-09 10:15:01.944413 | orchestrator |  "lvm_volumes": [ 2025-10-09 10:15:01.944424 | orchestrator |  { 2025-10-09 10:15:01.944434 | orchestrator |  "data": "osd-block-54591ae8-508f-52be-a28f-431401b6d1ad", 2025-10-09 10:15:01.944445 | orchestrator |  "data_vg": "ceph-54591ae8-508f-52be-a28f-431401b6d1ad" 2025-10-09 10:15:01.944456 | orchestrator |  }, 2025-10-09 
10:15:01.944467 | orchestrator |  { 2025-10-09 10:15:01.944478 | orchestrator |  "data": "osd-block-b4e7e843-affd-596c-a641-84f7bc574f86", 2025-10-09 10:15:01.944489 | orchestrator |  "data_vg": "ceph-b4e7e843-affd-596c-a641-84f7bc574f86" 2025-10-09 10:15:01.944499 | orchestrator |  } 2025-10-09 10:15:01.944510 | orchestrator |  ] 2025-10-09 10:15:01.944521 | orchestrator |  } 2025-10-09 10:15:01.944532 | orchestrator | } 2025-10-09 10:15:01.944543 | orchestrator | 2025-10-09 10:15:01.944554 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-10-09 10:15:01.944565 | orchestrator | Thursday 09 October 2025 10:14:59 +0000 (0:00:00.423) 0:00:13.610 ****** 2025-10-09 10:15:01.944582 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:15:01.944593 | orchestrator | 2025-10-09 10:15:01.944604 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-10-09 10:15:01.944615 | orchestrator | 2025-10-09 10:15:01.944626 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:15:01.944636 | orchestrator | Thursday 09 October 2025 10:15:01 +0000 (0:00:01.895) 0:00:15.505 ****** 2025-10-09 10:15:01.944647 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-10-09 10:15:01.944658 | orchestrator | 2025-10-09 10:15:01.944669 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:15:01.944680 | orchestrator | Thursday 09 October 2025 10:15:01 +0000 (0:00:00.334) 0:00:15.839 ****** 2025-10-09 10:15:01.944690 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:15:01.944701 | orchestrator | 2025-10-09 10:15:01.944712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:01.944730 | orchestrator | Thursday 09 October 2025 10:15:01 +0000 (0:00:00.279) 0:00:16.119 ****** 2025-10-09 10:15:10.886556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-10-09 10:15:10.886647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-10-09 10:15:10.886663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-10-09 10:15:10.886674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-10-09 10:15:10.886686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-10-09 10:15:10.886696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-10-09 10:15:10.886707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-10-09 10:15:10.886717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-10-09 10:15:10.886728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-10-09 10:15:10.886739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-10-09 10:15:10.886767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-10-09 10:15:10.886778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-10-09 10:15:10.886789 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-10-09 10:15:10.886800 | orchestrator | 2025-10-09 10:15:10.886816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.886828 | orchestrator | Thursday 09 October 2025 10:15:02 +0000 (0:00:00.400) 0:00:16.519 ****** 2025-10-09 10:15:10.886840 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.886851 | orchestrator | 2025-10-09 10:15:10.886862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.886873 | orchestrator | Thursday 09 October 2025 10:15:02 +0000 (0:00:00.264) 0:00:16.783 ****** 2025-10-09 10:15:10.886884 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.886894 | orchestrator | 2025-10-09 10:15:10.886905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.886916 | orchestrator | Thursday 09 October 2025 10:15:02 +0000 (0:00:00.222) 0:00:17.006 ****** 2025-10-09 10:15:10.886927 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.886937 | orchestrator | 2025-10-09 10:15:10.886948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.886959 | orchestrator | Thursday 09 October 2025 10:15:03 +0000 (0:00:00.219) 0:00:17.226 ****** 2025-10-09 10:15:10.886970 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.886981 | orchestrator | 2025-10-09 10:15:10.887009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887020 | orchestrator | Thursday 09 October 2025 10:15:03 +0000 (0:00:00.197) 0:00:17.424 ****** 2025-10-09 10:15:10.887031 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887041 | orchestrator | 2025-10-09 10:15:10.887052 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887063 | orchestrator | Thursday 09 October 2025 10:15:03 +0000 (0:00:00.723) 0:00:18.147 ****** 2025-10-09 10:15:10.887074 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887084 | orchestrator | 2025-10-09 10:15:10.887095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887106 | orchestrator | Thursday 09 October 2025 10:15:04 +0000 (0:00:00.248) 0:00:18.395 ****** 2025-10-09 10:15:10.887117 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887127 | orchestrator | 2025-10-09 10:15:10.887138 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887149 | orchestrator | Thursday 09 October 2025 10:15:04 +0000 (0:00:00.244) 0:00:18.639 ****** 2025-10-09 10:15:10.887160 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887170 | orchestrator | 2025-10-09 10:15:10.887181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887192 | orchestrator | Thursday 09 October 2025 10:15:04 +0000 (0:00:00.207) 0:00:18.847 ****** 2025-10-09 10:15:10.887203 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e) 2025-10-09 10:15:10.887215 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e) 2025-10-09 10:15:10.887225 | orchestrator | 2025-10-09 
10:15:10.887236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887247 | orchestrator | Thursday 09 October 2025 10:15:05 +0000 (0:00:00.506) 0:00:19.354 ****** 2025-10-09 10:15:10.887257 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b) 2025-10-09 10:15:10.887268 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b) 2025-10-09 10:15:10.887279 | orchestrator | 2025-10-09 10:15:10.887289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887300 | orchestrator | Thursday 09 October 2025 10:15:05 +0000 (0:00:00.482) 0:00:19.836 ****** 2025-10-09 10:15:10.887311 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c) 2025-10-09 10:15:10.887322 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c) 2025-10-09 10:15:10.887357 | orchestrator | 2025-10-09 10:15:10.887368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887379 | orchestrator | Thursday 09 October 2025 10:15:06 +0000 (0:00:00.428) 0:00:20.265 ****** 2025-10-09 10:15:10.887407 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256) 2025-10-09 10:15:10.887418 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256) 2025-10-09 10:15:10.887429 | orchestrator | 2025-10-09 10:15:10.887440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:10.887451 | orchestrator | Thursday 09 October 2025 10:15:06 +0000 (0:00:00.499) 0:00:20.765 ****** 2025-10-09 10:15:10.887462 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-09 10:15:10.887473 | orchestrator | 2025-10-09 10:15:10.887483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887501 | orchestrator | Thursday 09 October 2025 10:15:06 +0000 (0:00:00.375) 0:00:21.140 ****** 2025-10-09 10:15:10.887512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-10-09 10:15:10.887523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-10-09 10:15:10.887541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-10-09 10:15:10.887551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-10-09 10:15:10.887562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-10-09 10:15:10.887573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-10-09 10:15:10.887584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-10-09 10:15:10.887595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-10-09 10:15:10.887605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-10-09 10:15:10.887616 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-10-09 10:15:10.887627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-10-09 10:15:10.887637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-10-09 10:15:10.887648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-10-09 10:15:10.887659 | orchestrator | 2025-10-09 10:15:10.887669 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887680 | orchestrator | Thursday 09 October 2025 10:15:07 +0000 (0:00:00.452) 0:00:21.593 ****** 2025-10-09 10:15:10.887691 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887702 | orchestrator | 2025-10-09 10:15:10.887713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887723 | orchestrator | Thursday 09 October 2025 10:15:07 +0000 (0:00:00.227) 0:00:21.821 ****** 2025-10-09 10:15:10.887734 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887745 | orchestrator | 2025-10-09 10:15:10.887756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887767 | orchestrator | Thursday 09 October 2025 10:15:08 +0000 (0:00:00.732) 0:00:22.553 ****** 2025-10-09 10:15:10.887777 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887788 | orchestrator | 2025-10-09 10:15:10.887799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887810 | orchestrator | Thursday 09 October 2025 10:15:08 +0000 (0:00:00.203) 0:00:22.757 ****** 2025-10-09 10:15:10.887820 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887831 | orchestrator | 2025-10-09 10:15:10.887842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887853 | orchestrator | Thursday 09 October 2025 10:15:08 +0000 (0:00:00.218) 0:00:22.975 ****** 2025-10-09 10:15:10.887864 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887874 | orchestrator | 2025-10-09 10:15:10.887885 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887896 | orchestrator | Thursday 09 October 2025 10:15:08 +0000 (0:00:00.207) 0:00:23.182 ****** 2025-10-09 10:15:10.887907 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887918 | orchestrator | 2025-10-09 10:15:10.887928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887939 | orchestrator | Thursday 09 October 2025 10:15:09 +0000 (0:00:00.205) 0:00:23.388 ****** 2025-10-09 10:15:10.887950 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.887960 | orchestrator | 2025-10-09 10:15:10.887971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.887982 | orchestrator | Thursday 09 October 2025 10:15:09 +0000 (0:00:00.207) 0:00:23.595 ****** 2025-10-09 10:15:10.887993 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.888004 | orchestrator | 2025-10-09 10:15:10.888014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.888031 | orchestrator | Thursday 09 October 
2025 10:15:09 +0000 (0:00:00.231) 0:00:23.827 ****** 2025-10-09 10:15:10.888042 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-10-09 10:15:10.888053 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-10-09 10:15:10.888064 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-10-09 10:15:10.888075 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-10-09 10:15:10.888086 | orchestrator | 2025-10-09 10:15:10.888097 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:10.888108 | orchestrator | Thursday 09 October 2025 10:15:10 +0000 (0:00:00.998) 0:00:24.825 ****** 2025-10-09 10:15:10.888119 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:10.888129 | orchestrator | 2025-10-09 10:15:10.888147 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:18.429497 | orchestrator | Thursday 09 October 2025 10:15:10 +0000 (0:00:00.236) 0:00:25.062 ****** 2025-10-09 10:15:18.429608 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.429625 | orchestrator | 2025-10-09 10:15:18.429638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:18.429649 | orchestrator | Thursday 09 October 2025 10:15:11 +0000 (0:00:00.218) 0:00:25.280 ****** 2025-10-09 10:15:18.429660 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.429672 | orchestrator | 2025-10-09 10:15:18.429683 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:18.429694 | orchestrator | Thursday 09 October 2025 10:15:11 +0000 (0:00:00.184) 0:00:25.465 ****** 2025-10-09 10:15:18.429705 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.429716 | orchestrator | 2025-10-09 10:15:18.429748 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-10-09 10:15:18.429760 | orchestrator | Thursday 09 October 2025 10:15:12 +0000 (0:00:00.737) 0:00:26.202 ****** 2025-10-09 10:15:18.429771 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-10-09 10:15:18.429782 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-10-09 10:15:18.429793 | orchestrator | 2025-10-09 10:15:18.429804 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-10-09 10:15:18.429815 | orchestrator | Thursday 09 October 2025 10:15:12 +0000 (0:00:00.194) 0:00:26.397 ****** 2025-10-09 10:15:18.429826 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.429837 | orchestrator | 2025-10-09 10:15:18.429848 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-10-09 10:15:18.429859 | orchestrator | Thursday 09 October 2025 10:15:12 +0000 (0:00:00.136) 0:00:26.534 ****** 2025-10-09 10:15:18.429871 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.429882 | orchestrator | 2025-10-09 10:15:18.429893 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-10-09 10:15:18.429904 | orchestrator | Thursday 09 October 2025 10:15:12 +0000 (0:00:00.154) 0:00:26.688 ****** 2025-10-09 10:15:18.429915 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.429926 | orchestrator | 2025-10-09 10:15:18.429936 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-10-09 
10:15:18.429948 | orchestrator | Thursday 09 October 2025 10:15:12 +0000 (0:00:00.136) 0:00:26.825 ****** 2025-10-09 10:15:18.429959 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:15:18.429971 | orchestrator | 2025-10-09 10:15:18.429982 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-10-09 10:15:18.429994 | orchestrator | Thursday 09 October 2025 10:15:12 +0000 (0:00:00.147) 0:00:26.972 ****** 2025-10-09 10:15:18.430007 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}}) 2025-10-09 10:15:18.430078 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd550970-9f11-5339-ba56-252bcc7474b0'}}) 2025-10-09 10:15:18.430093 | orchestrator | 2025-10-09 10:15:18.430105 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-10-09 10:15:18.430138 | orchestrator | Thursday 09 October 2025 10:15:12 +0000 (0:00:00.175) 0:00:27.148 ****** 2025-10-09 10:15:18.430151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}})  2025-10-09 10:15:18.430165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd550970-9f11-5339-ba56-252bcc7474b0'}})  2025-10-09 10:15:18.430177 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430190 | orchestrator | 2025-10-09 10:15:18.430202 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-10-09 10:15:18.430214 | orchestrator | Thursday 09 October 2025 10:15:13 +0000 (0:00:00.184) 0:00:27.333 ****** 2025-10-09 10:15:18.430227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}})  2025-10-09 10:15:18.430239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd550970-9f11-5339-ba56-252bcc7474b0'}})  2025-10-09 10:15:18.430252 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430264 | orchestrator | 2025-10-09 10:15:18.430275 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-10-09 10:15:18.430288 | orchestrator | Thursday 09 October 2025 10:15:13 +0000 (0:00:00.172) 0:00:27.505 ****** 2025-10-09 10:15:18.430300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}})  2025-10-09 10:15:18.430312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd550970-9f11-5339-ba56-252bcc7474b0'}})  2025-10-09 10:15:18.430325 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430371 | orchestrator | 2025-10-09 10:15:18.430383 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-10-09 10:15:18.430393 | orchestrator | Thursday 09 October 2025 10:15:13 +0000 (0:00:00.202) 0:00:27.708 ****** 2025-10-09 10:15:18.430404 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:15:18.430415 | orchestrator | 2025-10-09 10:15:18.430426 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-10-09 10:15:18.430437 | orchestrator | Thursday 09 October 2025 10:15:13 +0000 (0:00:00.178) 0:00:27.887 ****** 2025-10-09 10:15:18.430448 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:15:18.430459 
| orchestrator | 2025-10-09 10:15:18.430470 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-10-09 10:15:18.430481 | orchestrator | Thursday 09 October 2025 10:15:13 +0000 (0:00:00.151) 0:00:28.038 ****** 2025-10-09 10:15:18.430492 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430503 | orchestrator | 2025-10-09 10:15:18.430533 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-10-09 10:15:18.430544 | orchestrator | Thursday 09 October 2025 10:15:14 +0000 (0:00:00.413) 0:00:28.452 ****** 2025-10-09 10:15:18.430555 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430566 | orchestrator | 2025-10-09 10:15:18.430577 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-10-09 10:15:18.430588 | orchestrator | Thursday 09 October 2025 10:15:14 +0000 (0:00:00.129) 0:00:28.581 ****** 2025-10-09 10:15:18.430599 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430609 | orchestrator | 2025-10-09 10:15:18.430620 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-10-09 10:15:18.430631 | orchestrator | Thursday 09 October 2025 10:15:14 +0000 (0:00:00.146) 0:00:28.728 ****** 2025-10-09 10:15:18.430642 | orchestrator | ok: [testbed-node-4] => { 2025-10-09 10:15:18.430653 | orchestrator |  "ceph_osd_devices": { 2025-10-09 10:15:18.430663 | orchestrator |  "sdb": { 2025-10-09 10:15:18.430674 | orchestrator |  "osd_lvm_uuid": "ef021fd3-cbd8-5c2d-a302-cb22746ffda4" 2025-10-09 10:15:18.430685 | orchestrator |  }, 2025-10-09 10:15:18.430696 | orchestrator |  "sdc": { 2025-10-09 10:15:18.430706 | orchestrator |  "osd_lvm_uuid": "dd550970-9f11-5339-ba56-252bcc7474b0" 2025-10-09 10:15:18.430725 | orchestrator |  } 2025-10-09 10:15:18.430736 | orchestrator |  } 2025-10-09 10:15:18.430747 | orchestrator | } 2025-10-09 10:15:18.430758 | orchestrator | 2025-10-09 10:15:18.430768 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-10-09 10:15:18.430779 | orchestrator | Thursday 09 October 2025 10:15:14 +0000 (0:00:00.185) 0:00:28.914 ****** 2025-10-09 10:15:18.430790 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430801 | orchestrator | 2025-10-09 10:15:18.430818 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-10-09 10:15:18.430829 | orchestrator | Thursday 09 October 2025 10:15:14 +0000 (0:00:00.178) 0:00:29.093 ****** 2025-10-09 10:15:18.430840 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430851 | orchestrator | 2025-10-09 10:15:18.430861 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-10-09 10:15:18.430872 | orchestrator | Thursday 09 October 2025 10:15:15 +0000 (0:00:00.165) 0:00:29.259 ****** 2025-10-09 10:15:18.430883 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:18.430894 | orchestrator | 2025-10-09 10:15:18.430905 | orchestrator | TASK [Print configuration data] ************************************************ 2025-10-09 10:15:18.430916 | orchestrator | Thursday 09 October 2025 10:15:15 +0000 (0:00:00.176) 0:00:29.435 ****** 2025-10-09 10:15:18.430927 | orchestrator | changed: [testbed-node-4] => { 2025-10-09 10:15:18.430937 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-10-09 10:15:18.430948 | orchestrator |  "ceph_osd_devices": { 2025-10-09 
10:15:18.430959 | orchestrator |  "sdb": { 2025-10-09 10:15:18.430970 | orchestrator |  "osd_lvm_uuid": "ef021fd3-cbd8-5c2d-a302-cb22746ffda4" 2025-10-09 10:15:18.430981 | orchestrator |  }, 2025-10-09 10:15:18.430996 | orchestrator |  "sdc": { 2025-10-09 10:15:18.431008 | orchestrator |  "osd_lvm_uuid": "dd550970-9f11-5339-ba56-252bcc7474b0" 2025-10-09 10:15:18.431019 | orchestrator |  } 2025-10-09 10:15:18.431030 | orchestrator |  }, 2025-10-09 10:15:18.431041 | orchestrator |  "lvm_volumes": [ 2025-10-09 10:15:18.431052 | orchestrator |  { 2025-10-09 10:15:18.431063 | orchestrator |  "data": "osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4", 2025-10-09 10:15:18.431074 | orchestrator |  "data_vg": "ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4" 2025-10-09 10:15:18.431084 | orchestrator |  }, 2025-10-09 10:15:18.431095 | orchestrator |  { 2025-10-09 10:15:18.431106 | orchestrator |  "data": "osd-block-dd550970-9f11-5339-ba56-252bcc7474b0", 2025-10-09 10:15:18.431117 | orchestrator |  "data_vg": "ceph-dd550970-9f11-5339-ba56-252bcc7474b0" 2025-10-09 10:15:18.431127 | orchestrator |  } 2025-10-09 10:15:18.431138 | orchestrator |  ] 2025-10-09 10:15:18.431149 | orchestrator |  } 2025-10-09 10:15:18.431159 | orchestrator | } 2025-10-09 10:15:18.431170 | orchestrator | 2025-10-09 10:15:18.431181 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-10-09 10:15:18.431192 | orchestrator | Thursday 09 October 2025 10:15:15 +0000 (0:00:00.289) 0:00:29.724 ****** 2025-10-09 10:15:18.431202 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-10-09 10:15:18.431213 | orchestrator | 2025-10-09 10:15:18.431224 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-10-09 10:15:18.431235 | orchestrator | 2025-10-09 10:15:18.431246 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:15:18.431256 | orchestrator | Thursday 09 October 2025 10:15:16 +0000 (0:00:01.306) 0:00:31.031 ****** 2025-10-09 10:15:18.431267 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-10-09 10:15:18.431278 | orchestrator | 2025-10-09 10:15:18.431289 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:15:18.431300 | orchestrator | Thursday 09 October 2025 10:15:17 +0000 (0:00:00.865) 0:00:31.897 ****** 2025-10-09 10:15:18.431311 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:15:18.431347 | orchestrator | 2025-10-09 10:15:18.431359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:18.431370 | orchestrator | Thursday 09 October 2025 10:15:17 +0000 (0:00:00.226) 0:00:32.123 ****** 2025-10-09 10:15:18.431381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-10-09 10:15:18.431392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-10-09 10:15:18.431404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-10-09 10:15:18.431414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-10-09 10:15:18.431425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-10-09 10:15:18.431436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-10-09 10:15:18.431453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-10-09 10:15:28.121516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-10-09 10:15:28.121622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-10-09 10:15:28.121638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-10-09 10:15:28.121649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-10-09 10:15:28.121660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-10-09 10:15:28.121672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-10-09 10:15:28.121683 | orchestrator | 2025-10-09 10:15:28.121696 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.121708 | orchestrator | Thursday 09 October 2025 10:15:18 +0000 (0:00:00.479) 0:00:32.603 ****** 2025-10-09 10:15:28.121719 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.121731 | orchestrator | 2025-10-09 10:15:28.121742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.121754 | orchestrator | Thursday 09 October 2025 10:15:18 +0000 (0:00:00.281) 0:00:32.884 ****** 2025-10-09 10:15:28.121765 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.121776 | orchestrator | 2025-10-09 10:15:28.121787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.121798 | orchestrator | Thursday 09 October 2025 10:15:18 +0000 (0:00:00.292) 0:00:33.177 ****** 2025-10-09 10:15:28.121809 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.121820 | orchestrator | 2025-10-09 10:15:28.121831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.121842 | orchestrator | Thursday 09 October 2025 10:15:19 +0000 (0:00:00.214) 0:00:33.391 ****** 2025-10-09 10:15:28.121853 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.121864 | orchestrator | 2025-10-09 10:15:28.121874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.121885 | orchestrator | Thursday 09 October 2025 10:15:19 +0000 (0:00:00.207) 0:00:33.599 ****** 2025-10-09 10:15:28.121896 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.121907 | orchestrator | 2025-10-09 10:15:28.121918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.121929 | orchestrator | Thursday 09 October 2025 10:15:19 +0000 (0:00:00.190) 0:00:33.790 ****** 2025-10-09 10:15:28.121940 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.121951 | orchestrator | 2025-10-09 10:15:28.121962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.121973 | orchestrator | Thursday 09 October 2025 10:15:19 +0000 (0:00:00.217) 0:00:34.007 ****** 2025-10-09 10:15:28.121984 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.121995 | orchestrator | 2025-10-09 10:15:28.122085 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-10-09 10:15:28.122101 | orchestrator | Thursday 09 October 2025 10:15:20 +0000 (0:00:00.239) 0:00:34.246 ****** 2025-10-09 10:15:28.122114 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.122135 | orchestrator | 2025-10-09 10:15:28.122163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.122176 | orchestrator | Thursday 09 October 2025 10:15:20 +0000 (0:00:00.239) 0:00:34.486 ****** 2025-10-09 10:15:28.122188 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633) 2025-10-09 10:15:28.122202 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633) 2025-10-09 10:15:28.122215 | orchestrator | 2025-10-09 10:15:28.122227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.122239 | orchestrator | Thursday 09 October 2025 10:15:21 +0000 (0:00:00.956) 0:00:35.442 ****** 2025-10-09 10:15:28.122252 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317) 2025-10-09 10:15:28.122264 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317) 2025-10-09 10:15:28.122276 | orchestrator | 2025-10-09 10:15:28.122288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.122300 | orchestrator | Thursday 09 October 2025 10:15:21 +0000 (0:00:00.615) 0:00:36.058 ****** 2025-10-09 10:15:28.122312 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5) 2025-10-09 10:15:28.122324 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5) 2025-10-09 10:15:28.122360 | orchestrator | 2025-10-09 10:15:28.122372 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.122385 | orchestrator | Thursday 09 October 2025 10:15:22 +0000 (0:00:00.640) 0:00:36.699 ****** 2025-10-09 10:15:28.122397 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb) 2025-10-09 10:15:28.122409 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb) 2025-10-09 10:15:28.122419 | orchestrator | 2025-10-09 10:15:28.122430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:15:28.122441 | orchestrator | Thursday 09 October 2025 10:15:23 +0000 (0:00:00.532) 0:00:37.231 ****** 2025-10-09 10:15:28.122452 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-09 10:15:28.122462 | orchestrator | 2025-10-09 10:15:28.122473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.122484 | orchestrator | Thursday 09 October 2025 10:15:23 +0000 (0:00:00.411) 0:00:37.643 ****** 2025-10-09 10:15:28.122512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-10-09 10:15:28.122524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-10-09 10:15:28.122535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-10-09 10:15:28.122545 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-10-09 10:15:28.122556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-10-09 10:15:28.122567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-10-09 10:15:28.122577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-10-09 10:15:28.122588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-10-09 10:15:28.122598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-10-09 10:15:28.122622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-10-09 10:15:28.122633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-10-09 10:15:28.122643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-10-09 10:15:28.122654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-10-09 10:15:28.122665 | orchestrator | 2025-10-09 10:15:28.122675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.122686 | orchestrator | Thursday 09 October 2025 10:15:23 +0000 (0:00:00.436) 0:00:38.079 ****** 2025-10-09 10:15:28.122696 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.122707 | orchestrator | 2025-10-09 10:15:28.122718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.122729 | orchestrator | Thursday 09 October 2025 10:15:24 +0000 (0:00:00.234) 0:00:38.314 ****** 2025-10-09 10:15:28.122739 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.122750 | orchestrator | 2025-10-09 10:15:28.122761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.122772 | orchestrator | Thursday 09 October 2025 10:15:24 +0000 (0:00:00.237) 0:00:38.551 ****** 2025-10-09 10:15:28.122782 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.122793 | orchestrator | 2025-10-09 10:15:28.122804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.122815 | orchestrator | Thursday 09 October 2025 10:15:24 +0000 (0:00:00.218) 0:00:38.770 ****** 2025-10-09 10:15:28.122825 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.122836 | orchestrator | 2025-10-09 10:15:28.122847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.122858 | orchestrator | Thursday 09 October 2025 10:15:24 +0000 (0:00:00.224) 0:00:38.995 ****** 2025-10-09 10:15:28.122869 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.122880 | orchestrator | 2025-10-09 10:15:28.122890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.122901 | orchestrator | Thursday 09 October 2025 10:15:25 +0000 (0:00:00.254) 0:00:39.249 ****** 2025-10-09 10:15:28.122912 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.122922 | orchestrator | 2025-10-09 10:15:28.122933 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-10-09 10:15:28.122944 | orchestrator | Thursday 09 October 2025 10:15:25 +0000 (0:00:00.771) 0:00:40.021 ****** 2025-10-09 10:15:28.122954 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.122965 | orchestrator | 2025-10-09 10:15:28.122976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.122987 | orchestrator | Thursday 09 October 2025 10:15:26 +0000 (0:00:00.221) 0:00:40.242 ****** 2025-10-09 10:15:28.122997 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.123008 | orchestrator | 2025-10-09 10:15:28.123019 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.123030 | orchestrator | Thursday 09 October 2025 10:15:26 +0000 (0:00:00.303) 0:00:40.545 ****** 2025-10-09 10:15:28.123041 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-10-09 10:15:28.123051 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-10-09 10:15:28.123062 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-10-09 10:15:28.123073 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-10-09 10:15:28.123084 | orchestrator | 2025-10-09 10:15:28.123095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.123106 | orchestrator | Thursday 09 October 2025 10:15:27 +0000 (0:00:00.727) 0:00:41.273 ****** 2025-10-09 10:15:28.123116 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.123127 | orchestrator | 2025-10-09 10:15:28.123138 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.123149 | orchestrator | Thursday 09 October 2025 10:15:27 +0000 (0:00:00.222) 0:00:41.496 ****** 2025-10-09 10:15:28.123166 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.123177 | orchestrator | 2025-10-09 10:15:28.123188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.123199 | orchestrator | Thursday 09 October 2025 10:15:27 +0000 (0:00:00.241) 0:00:41.738 ****** 2025-10-09 10:15:28.123210 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.123221 | orchestrator | 2025-10-09 10:15:28.123232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:15:28.123243 | orchestrator | Thursday 09 October 2025 10:15:27 +0000 (0:00:00.232) 0:00:41.970 ****** 2025-10-09 10:15:28.123260 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:28.123271 | orchestrator | 2025-10-09 10:15:28.123282 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-10-09 10:15:28.123299 | orchestrator | Thursday 09 October 2025 10:15:28 +0000 (0:00:00.325) 0:00:42.295 ****** 2025-10-09 10:15:32.775240 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-10-09 10:15:32.775353 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-10-09 10:15:32.775370 | orchestrator | 2025-10-09 10:15:32.775383 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-10-09 10:15:32.775394 | orchestrator | Thursday 09 October 2025 10:15:28 +0000 (0:00:00.259) 0:00:42.555 ****** 2025-10-09 10:15:32.775405 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.775417 | orchestrator | 2025-10-09 10:15:32.775428 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-10-09 10:15:32.775439 | orchestrator | Thursday 09 October 2025 10:15:28 +0000 (0:00:00.137) 0:00:42.693 ****** 2025-10-09 10:15:32.775450 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.775460 | orchestrator | 2025-10-09 10:15:32.775471 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-10-09 10:15:32.775482 | orchestrator | Thursday 09 October 2025 10:15:28 +0000 (0:00:00.150) 0:00:42.843 ****** 2025-10-09 10:15:32.775492 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.775503 | orchestrator | 2025-10-09 10:15:32.775514 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-10-09 10:15:32.775525 | orchestrator | Thursday 09 October 2025 10:15:28 +0000 (0:00:00.130) 0:00:42.973 ****** 2025-10-09 10:15:32.775536 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:15:32.775547 | orchestrator | 2025-10-09 10:15:32.775558 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-10-09 10:15:32.775569 | orchestrator | Thursday 09 October 2025 10:15:29 +0000 (0:00:00.370) 0:00:43.343 ****** 2025-10-09 10:15:32.775580 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '858e0050-0995-591a-8058-d3b9e3e91008'}}) 2025-10-09 10:15:32.775592 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a727a7b-5e51-5ff8-a4b5-d0215765e108'}}) 2025-10-09 10:15:32.775603 | orchestrator | 2025-10-09 10:15:32.775614 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-10-09 10:15:32.775625 | orchestrator | Thursday 09 October 2025 10:15:29 +0000 (0:00:00.193) 0:00:43.537 ****** 2025-10-09 10:15:32.775636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '858e0050-0995-591a-8058-d3b9e3e91008'}})  2025-10-09 10:15:32.775649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a727a7b-5e51-5ff8-a4b5-d0215765e108'}})  2025-10-09 10:15:32.775660 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.775670 | orchestrator | 2025-10-09 10:15:32.775697 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-10-09 10:15:32.775709 | orchestrator | Thursday 09 October 2025 10:15:29 +0000 (0:00:00.178) 0:00:43.716 ****** 2025-10-09 10:15:32.775721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '858e0050-0995-591a-8058-d3b9e3e91008'}})  2025-10-09 10:15:32.775732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a727a7b-5e51-5ff8-a4b5-d0215765e108'}})  2025-10-09 10:15:32.775765 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.775776 | orchestrator | 2025-10-09 10:15:32.775787 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-10-09 10:15:32.775798 | orchestrator | Thursday 09 October 2025 10:15:29 +0000 (0:00:00.186) 0:00:43.902 ****** 2025-10-09 10:15:32.775808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '858e0050-0995-591a-8058-d3b9e3e91008'}})  2025-10-09 10:15:32.775819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a727a7b-5e51-5ff8-a4b5-d0215765e108'}})  2025-10-09 
10:15:32.775830 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.775840 | orchestrator | 2025-10-09 10:15:32.775851 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-10-09 10:15:32.775862 | orchestrator | Thursday 09 October 2025 10:15:29 +0000 (0:00:00.182) 0:00:44.085 ****** 2025-10-09 10:15:32.775873 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:15:32.775883 | orchestrator | 2025-10-09 10:15:32.775894 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-10-09 10:15:32.775905 | orchestrator | Thursday 09 October 2025 10:15:30 +0000 (0:00:00.146) 0:00:44.232 ****** 2025-10-09 10:15:32.775915 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:15:32.775926 | orchestrator | 2025-10-09 10:15:32.775936 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-10-09 10:15:32.775947 | orchestrator | Thursday 09 October 2025 10:15:30 +0000 (0:00:00.155) 0:00:44.387 ****** 2025-10-09 10:15:32.775958 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.775968 | orchestrator | 2025-10-09 10:15:32.775979 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-10-09 10:15:32.775990 | orchestrator | Thursday 09 October 2025 10:15:30 +0000 (0:00:00.140) 0:00:44.527 ****** 2025-10-09 10:15:32.776000 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.776011 | orchestrator | 2025-10-09 10:15:32.776022 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-10-09 10:15:32.776032 | orchestrator | Thursday 09 October 2025 10:15:30 +0000 (0:00:00.140) 0:00:44.668 ****** 2025-10-09 10:15:32.776043 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.776053 | orchestrator | 2025-10-09 10:15:32.776064 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-10-09 10:15:32.776075 | orchestrator | Thursday 09 October 2025 10:15:30 +0000 (0:00:00.159) 0:00:44.827 ****** 2025-10-09 10:15:32.776085 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:15:32.776096 | orchestrator |  "ceph_osd_devices": { 2025-10-09 10:15:32.776107 | orchestrator |  "sdb": { 2025-10-09 10:15:32.776118 | orchestrator |  "osd_lvm_uuid": "858e0050-0995-591a-8058-d3b9e3e91008" 2025-10-09 10:15:32.776145 | orchestrator |  }, 2025-10-09 10:15:32.776157 | orchestrator |  "sdc": { 2025-10-09 10:15:32.776168 | orchestrator |  "osd_lvm_uuid": "7a727a7b-5e51-5ff8-a4b5-d0215765e108" 2025-10-09 10:15:32.776179 | orchestrator |  } 2025-10-09 10:15:32.776189 | orchestrator |  } 2025-10-09 10:15:32.776200 | orchestrator | } 2025-10-09 10:15:32.776211 | orchestrator | 2025-10-09 10:15:32.776222 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-10-09 10:15:32.776233 | orchestrator | Thursday 09 October 2025 10:15:30 +0000 (0:00:00.146) 0:00:44.974 ****** 2025-10-09 10:15:32.776244 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.776254 | orchestrator | 2025-10-09 10:15:32.776265 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-10-09 10:15:32.776276 | orchestrator | Thursday 09 October 2025 10:15:30 +0000 (0:00:00.139) 0:00:45.114 ****** 2025-10-09 10:15:32.776286 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.776297 | orchestrator | 2025-10-09 10:15:32.776308 | orchestrator | 
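[Editor's note] For reference, the ceph_osd_devices structure printed above and the lvm_volumes list printed a few tasks later correspond to vars data roughly like the following. This is a sketch assembled only from the values visible in this log for testbed-node-5; the exact file layout produced by the ceph-configure-lvm-volumes play may differ.

    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 858e0050-0995-591a-8058-d3b9e3e91008
      sdc:
        osd_lvm_uuid: 7a727a7b-5e51-5ff8-a4b5-d0215765e108

    # Derived "block only" layout: one VG/LV pair per OSD device, named after the UUID.
    lvm_volumes:
      - data: osd-block-858e0050-0995-591a-8058-d3b9e3e91008
        data_vg: ceph-858e0050-0995-591a-8058-d3b9e3e91008
      - data: osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108
        data_vg: ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108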
TASK [Print shared DB/WAL devices] ********************************************* 2025-10-09 10:15:32.776406 | orchestrator | Thursday 09 October 2025 10:15:31 +0000 (0:00:00.379) 0:00:45.493 ****** 2025-10-09 10:15:32.776418 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:32.776429 | orchestrator | 2025-10-09 10:15:32.776440 | orchestrator | TASK [Print configuration data] ************************************************ 2025-10-09 10:15:32.776451 | orchestrator | Thursday 09 October 2025 10:15:31 +0000 (0:00:00.146) 0:00:45.640 ****** 2025-10-09 10:15:32.776461 | orchestrator | changed: [testbed-node-5] => { 2025-10-09 10:15:32.776472 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-10-09 10:15:32.776483 | orchestrator |  "ceph_osd_devices": { 2025-10-09 10:15:32.776494 | orchestrator |  "sdb": { 2025-10-09 10:15:32.776504 | orchestrator |  "osd_lvm_uuid": "858e0050-0995-591a-8058-d3b9e3e91008" 2025-10-09 10:15:32.776515 | orchestrator |  }, 2025-10-09 10:15:32.776526 | orchestrator |  "sdc": { 2025-10-09 10:15:32.776536 | orchestrator |  "osd_lvm_uuid": "7a727a7b-5e51-5ff8-a4b5-d0215765e108" 2025-10-09 10:15:32.776547 | orchestrator |  } 2025-10-09 10:15:32.776558 | orchestrator |  }, 2025-10-09 10:15:32.776568 | orchestrator |  "lvm_volumes": [ 2025-10-09 10:15:32.776579 | orchestrator |  { 2025-10-09 10:15:32.776590 | orchestrator |  "data": "osd-block-858e0050-0995-591a-8058-d3b9e3e91008", 2025-10-09 10:15:32.776600 | orchestrator |  "data_vg": "ceph-858e0050-0995-591a-8058-d3b9e3e91008" 2025-10-09 10:15:32.776611 | orchestrator |  }, 2025-10-09 10:15:32.776622 | orchestrator |  { 2025-10-09 10:15:32.776632 | orchestrator |  "data": "osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108", 2025-10-09 10:15:32.776643 | orchestrator |  "data_vg": "ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108" 2025-10-09 10:15:32.776654 | orchestrator |  } 2025-10-09 10:15:32.776664 | orchestrator |  ] 2025-10-09 10:15:32.776675 | orchestrator |  } 2025-10-09 10:15:32.776686 | orchestrator | } 2025-10-09 10:15:32.776701 | orchestrator | 2025-10-09 10:15:32.776712 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-10-09 10:15:32.776723 | orchestrator | Thursday 09 October 2025 10:15:31 +0000 (0:00:00.219) 0:00:45.859 ****** 2025-10-09 10:15:32.776734 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-10-09 10:15:32.776745 | orchestrator | 2025-10-09 10:15:32.776756 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:15:32.776776 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-09 10:15:32.776788 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-09 10:15:32.776799 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-09 10:15:32.776810 | orchestrator | 2025-10-09 10:15:32.776820 | orchestrator | 2025-10-09 10:15:32.776831 | orchestrator | 2025-10-09 10:15:32.776842 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:15:32.776853 | orchestrator | Thursday 09 October 2025 10:15:32 +0000 (0:00:01.063) 0:00:46.923 ****** 2025-10-09 10:15:32.776863 | orchestrator | =============================================================================== 2025-10-09 10:15:32.776874 | orchestrator | Write configuration file 
------------------------------------------------ 4.27s 2025-10-09 10:15:32.776885 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.48s 2025-10-09 10:15:32.776895 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s 2025-10-09 10:15:32.776906 | orchestrator | Add known partitions to the list of available block devices ------------- 1.27s 2025-10-09 10:15:32.776917 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2025-10-09 10:15:32.776927 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2025-10-09 10:15:32.776945 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2025-10-09 10:15:32.776956 | orchestrator | Print configuration data ------------------------------------------------ 0.93s 2025-10-09 10:15:32.776967 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2025-10-09 10:15:32.776977 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s 2025-10-09 10:15:32.776988 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2025-10-09 10:15:32.776999 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-10-09 10:15:32.777010 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-10-09 10:15:32.777020 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.73s 2025-10-09 10:15:32.777039 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-10-09 10:15:33.185728 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-10-09 10:15:33.185808 | orchestrator | Set DB devices config data ---------------------------------------------- 0.71s 2025-10-09 10:15:33.185819 | orchestrator | Print DB devices -------------------------------------------------------- 0.69s 2025-10-09 10:15:33.185830 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.66s 2025-10-09 10:15:33.185842 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-10-09 10:15:55.907927 | orchestrator | 2025-10-09 10:15:55 | INFO  | Task 5bea70d7-ee2a-4bf9-81de-81fc46287e6a (sync inventory) is running in background. Output coming soon. 
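[Editor's note] The "Write configuration file" handler above runs delegated to testbed-manager (192.168.16.5) and persists the collected configuration. A minimal sketch of such a handler, assuming the _ceph_configure_lvm_config_data fact shown in "Print configuration data" is dumped into a per-host vars file; the destination path is an assumption for illustration, not taken from this log.

    - name: Write configuration file
      ansible.builtin.copy:
        content: "{{ _ceph_configure_lvm_config_data | to_nice_yaml }}"
        # assumed target location on the manager node
        dest: "/opt/configuration/inventory/host_vars/{{ inventory_hostname }}/ceph-lvm-configuration.yml"
      delegate_to: testbed-manager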
2025-10-09 10:16:26.090604 | orchestrator | 2025-10-09 10:15:57 | INFO  | Starting group_vars file reorganization 2025-10-09 10:16:26.090700 | orchestrator | 2025-10-09 10:15:57 | INFO  | Moved 0 file(s) to their respective directories 2025-10-09 10:16:26.090715 | orchestrator | 2025-10-09 10:15:57 | INFO  | Group_vars file reorganization completed 2025-10-09 10:16:26.090725 | orchestrator | 2025-10-09 10:16:00 | INFO  | Starting variable preparation from inventory 2025-10-09 10:16:26.090734 | orchestrator | 2025-10-09 10:16:04 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-10-09 10:16:26.090744 | orchestrator | 2025-10-09 10:16:04 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-10-09 10:16:26.090753 | orchestrator | 2025-10-09 10:16:04 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-10-09 10:16:26.090761 | orchestrator | 2025-10-09 10:16:04 | INFO  | 3 file(s) written, 6 host(s) processed 2025-10-09 10:16:26.090770 | orchestrator | 2025-10-09 10:16:04 | INFO  | Variable preparation completed 2025-10-09 10:16:26.090779 | orchestrator | 2025-10-09 10:16:06 | INFO  | Starting inventory overwrite handling 2025-10-09 10:16:26.090788 | orchestrator | 2025-10-09 10:16:06 | INFO  | Handling group overwrites in 99-overwrite 2025-10-09 10:16:26.090797 | orchestrator | 2025-10-09 10:16:06 | INFO  | Removing group frr:children from 60-generic 2025-10-09 10:16:26.090806 | orchestrator | 2025-10-09 10:16:06 | INFO  | Removing group storage:children from 50-kolla 2025-10-09 10:16:26.090815 | orchestrator | 2025-10-09 10:16:06 | INFO  | Removing group netbird:children from 50-infrastructure 2025-10-09 10:16:26.090824 | orchestrator | 2025-10-09 10:16:06 | INFO  | Removing group ceph-mds from 50-ceph 2025-10-09 10:16:26.090833 | orchestrator | 2025-10-09 10:16:06 | INFO  | Removing group ceph-rgw from 50-ceph 2025-10-09 10:16:26.090842 | orchestrator | 2025-10-09 10:16:06 | INFO  | Handling group overwrites in 20-roles 2025-10-09 10:16:26.090851 | orchestrator | 2025-10-09 10:16:06 | INFO  | Removing group k3s_node from 50-infrastructure 2025-10-09 10:16:26.090884 | orchestrator | 2025-10-09 10:16:06 | INFO  | Removed 6 group(s) in total 2025-10-09 10:16:26.090893 | orchestrator | 2025-10-09 10:16:06 | INFO  | Inventory overwrite handling completed 2025-10-09 10:16:26.090901 | orchestrator | 2025-10-09 10:16:07 | INFO  | Starting merge of inventory files 2025-10-09 10:16:26.090910 | orchestrator | 2025-10-09 10:16:07 | INFO  | Inventory files merged successfully 2025-10-09 10:16:26.090918 | orchestrator | 2025-10-09 10:16:12 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-10-09 10:16:26.090927 | orchestrator | 2025-10-09 10:16:24 | INFO  | Successfully wrote ClusterShell configuration 2025-10-09 10:16:26.090936 | orchestrator | [master 0dbf84f] 2025-10-09-10-16 2025-10-09 10:16:26.090946 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-10-09 10:16:28.630807 | orchestrator | 2025-10-09 10:16:28 | INFO  | Task e9791d08-d9d1-43a2-b6eb-a5481d7666a0 (ceph-create-lvm-devices) was prepared for execution. 2025-10-09 10:16:28.630930 | orchestrator | 2025-10-09 10:16:28 | INFO  | It takes a moment until task e9791d08-d9d1-43a2-b6eb-a5481d7666a0 (ceph-create-lvm-devices) has been started and output is visible here. 
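[Editor's note] The inventory sync above reports three generated group_vars files. Their contents are not shown in the log, but based on the variable names they plausibly look like the sketch below; only the file and variable names come from the log, the host names and the FSID placeholder are illustrative assumptions.

    # 050-infrastructure-cephclient-mons.yml (illustrative)
    cephclient_mons:
      - testbed-node-0
      - testbed-node-1
      - testbed-node-2

    # 050-ceph-cluster-fsid.yml (illustrative)
    ceph_cluster_fsid: 00000000-0000-0000-0000-000000000000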
2025-10-09 10:16:42.878435 | orchestrator | 2025-10-09 10:16:42.878549 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-10-09 10:16:42.878565 | orchestrator | 2025-10-09 10:16:42.878577 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:16:42.878589 | orchestrator | Thursday 09 October 2025 10:16:33 +0000 (0:00:00.335) 0:00:00.335 ****** 2025-10-09 10:16:42.878601 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:16:42.878612 | orchestrator | 2025-10-09 10:16:42.878624 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:16:42.878635 | orchestrator | Thursday 09 October 2025 10:16:34 +0000 (0:00:00.292) 0:00:00.628 ****** 2025-10-09 10:16:42.878646 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:42.878659 | orchestrator | 2025-10-09 10:16:42.878670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.878681 | orchestrator | Thursday 09 October 2025 10:16:34 +0000 (0:00:00.314) 0:00:00.942 ****** 2025-10-09 10:16:42.878692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-10-09 10:16:42.878704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-10-09 10:16:42.878716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-10-09 10:16:42.878727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-10-09 10:16:42.878754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-10-09 10:16:42.878765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-10-09 10:16:42.878776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-10-09 10:16:42.878787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-10-09 10:16:42.878798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-10-09 10:16:42.878810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-10-09 10:16:42.878821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-10-09 10:16:42.878831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-10-09 10:16:42.878842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-10-09 10:16:42.878853 | orchestrator | 2025-10-09 10:16:42.878865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.878900 | orchestrator | Thursday 09 October 2025 10:16:35 +0000 (0:00:00.530) 0:00:01.473 ****** 2025-10-09 10:16:42.878913 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.878926 | orchestrator | 2025-10-09 10:16:42.878938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.878969 | orchestrator | Thursday 09 October 2025 10:16:35 +0000 (0:00:00.558) 0:00:02.031 ****** 2025-10-09 10:16:42.878982 | orchestrator | skipping: [testbed-node-3] 2025-10-09 
10:16:42.878995 | orchestrator | 2025-10-09 10:16:42.879007 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879019 | orchestrator | Thursday 09 October 2025 10:16:35 +0000 (0:00:00.227) 0:00:02.259 ****** 2025-10-09 10:16:42.879031 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879043 | orchestrator | 2025-10-09 10:16:42.879062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879075 | orchestrator | Thursday 09 October 2025 10:16:36 +0000 (0:00:00.277) 0:00:02.536 ****** 2025-10-09 10:16:42.879087 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879099 | orchestrator | 2025-10-09 10:16:42.879111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879123 | orchestrator | Thursday 09 October 2025 10:16:36 +0000 (0:00:00.219) 0:00:02.755 ****** 2025-10-09 10:16:42.879136 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879148 | orchestrator | 2025-10-09 10:16:42.879160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879172 | orchestrator | Thursday 09 October 2025 10:16:36 +0000 (0:00:00.219) 0:00:02.974 ****** 2025-10-09 10:16:42.879185 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879197 | orchestrator | 2025-10-09 10:16:42.879209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879220 | orchestrator | Thursday 09 October 2025 10:16:36 +0000 (0:00:00.272) 0:00:03.247 ****** 2025-10-09 10:16:42.879232 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879245 | orchestrator | 2025-10-09 10:16:42.879258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879270 | orchestrator | Thursday 09 October 2025 10:16:37 +0000 (0:00:00.251) 0:00:03.498 ****** 2025-10-09 10:16:42.879281 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879292 | orchestrator | 2025-10-09 10:16:42.879303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879336 | orchestrator | Thursday 09 October 2025 10:16:37 +0000 (0:00:00.222) 0:00:03.720 ****** 2025-10-09 10:16:42.879348 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f) 2025-10-09 10:16:42.879361 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f) 2025-10-09 10:16:42.879372 | orchestrator | 2025-10-09 10:16:42.879383 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879394 | orchestrator | Thursday 09 October 2025 10:16:37 +0000 (0:00:00.480) 0:00:04.201 ****** 2025-10-09 10:16:42.879424 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16) 2025-10-09 10:16:42.879436 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16) 2025-10-09 10:16:42.879447 | orchestrator | 2025-10-09 10:16:42.879458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879469 | orchestrator | Thursday 09 October 2025 10:16:38 +0000 (0:00:00.726) 0:00:04.928 ****** 2025-10-09 
10:16:42.879480 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057) 2025-10-09 10:16:42.879491 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057) 2025-10-09 10:16:42.879502 | orchestrator | 2025-10-09 10:16:42.879513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879532 | orchestrator | Thursday 09 October 2025 10:16:39 +0000 (0:00:00.708) 0:00:05.636 ****** 2025-10-09 10:16:42.879543 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2) 2025-10-09 10:16:42.879554 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2) 2025-10-09 10:16:42.879565 | orchestrator | 2025-10-09 10:16:42.879576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:42.879586 | orchestrator | Thursday 09 October 2025 10:16:40 +0000 (0:00:01.152) 0:00:06.788 ****** 2025-10-09 10:16:42.879597 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-09 10:16:42.879608 | orchestrator | 2025-10-09 10:16:42.879619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.879630 | orchestrator | Thursday 09 October 2025 10:16:40 +0000 (0:00:00.349) 0:00:07.138 ****** 2025-10-09 10:16:42.879641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-10-09 10:16:42.879651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-10-09 10:16:42.879662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-10-09 10:16:42.879673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-10-09 10:16:42.879684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-10-09 10:16:42.879694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-10-09 10:16:42.879705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-10-09 10:16:42.879716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-10-09 10:16:42.879726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-10-09 10:16:42.879737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-10-09 10:16:42.879748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-10-09 10:16:42.879758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-10-09 10:16:42.879769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-10-09 10:16:42.879780 | orchestrator | 2025-10-09 10:16:42.879791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.879801 | orchestrator | Thursday 09 October 2025 10:16:41 +0000 (0:00:00.475) 0:00:07.614 ****** 2025-10-09 10:16:42.879812 | orchestrator | skipping: [testbed-node-3] 
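[Editor's note] The repeated "Add known links" tasks above resolve stable /dev/disk/by-id aliases (e.g. scsi-0QEMU_QEMU_HARDDISK_...) for each discovered device. A minimal sketch of how such a lookup can be done from gathered Ansible facts; the block_devices variable name is an assumption, and OSISM's _add-device-links.yml may be implemented differently.

    - name: Add known links to the list of available block devices
      ansible.builtin.set_fact:
        block_devices: "{{ (block_devices | default([])) + ansible_facts.devices[item].links.ids }}"
      loop: "{{ ansible_facts.devices.keys() | list }}"
      when: ansible_facts.devices[item].links.ids | length > 0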
2025-10-09 10:16:42.879823 | orchestrator | 2025-10-09 10:16:42.879834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.879845 | orchestrator | Thursday 09 October 2025 10:16:41 +0000 (0:00:00.197) 0:00:07.811 ****** 2025-10-09 10:16:42.879855 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879867 | orchestrator | 2025-10-09 10:16:42.879878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.879888 | orchestrator | Thursday 09 October 2025 10:16:41 +0000 (0:00:00.220) 0:00:08.031 ****** 2025-10-09 10:16:42.879899 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879910 | orchestrator | 2025-10-09 10:16:42.879921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.879932 | orchestrator | Thursday 09 October 2025 10:16:41 +0000 (0:00:00.205) 0:00:08.237 ****** 2025-10-09 10:16:42.879943 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.879954 | orchestrator | 2025-10-09 10:16:42.879965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.879976 | orchestrator | Thursday 09 October 2025 10:16:41 +0000 (0:00:00.232) 0:00:08.469 ****** 2025-10-09 10:16:42.879992 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.880003 | orchestrator | 2025-10-09 10:16:42.880014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.880025 | orchestrator | Thursday 09 October 2025 10:16:42 +0000 (0:00:00.207) 0:00:08.676 ****** 2025-10-09 10:16:42.880036 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.880046 | orchestrator | 2025-10-09 10:16:42.880057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.880068 | orchestrator | Thursday 09 October 2025 10:16:42 +0000 (0:00:00.227) 0:00:08.903 ****** 2025-10-09 10:16:42.880079 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:42.880090 | orchestrator | 2025-10-09 10:16:42.880101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:42.880112 | orchestrator | Thursday 09 October 2025 10:16:42 +0000 (0:00:00.222) 0:00:09.126 ****** 2025-10-09 10:16:42.880129 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.385363 | orchestrator | 2025-10-09 10:16:51.385450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:51.385463 | orchestrator | Thursday 09 October 2025 10:16:42 +0000 (0:00:00.210) 0:00:09.336 ****** 2025-10-09 10:16:51.385472 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-10-09 10:16:51.385482 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-10-09 10:16:51.385491 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-10-09 10:16:51.385500 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-10-09 10:16:51.385509 | orchestrator | 2025-10-09 10:16:51.385518 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:51.385527 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:01.174) 0:00:10.511 ****** 2025-10-09 10:16:51.385535 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.385544 | orchestrator | 2025-10-09 10:16:51.385553 | orchestrator | 
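[Editor's note] Similarly, the "Add known partitions" tasks collect partition names such as sda1/sda14/sda15/sda16 from the device facts. A sketch under the same assumptions as the previous note:

    - name: Add known partitions to the list of available block devices
      ansible.builtin.set_fact:
        block_devices: "{{ (block_devices | default([])) + ((ansible_facts.devices[item].partitions | default({})).keys() | list) }}"
      loop: "{{ ansible_facts.devices.keys() | list }}"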
TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:51.385561 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:00.301) 0:00:10.813 ****** 2025-10-09 10:16:51.385570 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.385578 | orchestrator | 2025-10-09 10:16:51.385587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:51.385595 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:00.218) 0:00:11.031 ****** 2025-10-09 10:16:51.385604 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.385613 | orchestrator | 2025-10-09 10:16:51.385621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:51.385630 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:00.224) 0:00:11.256 ****** 2025-10-09 10:16:51.385639 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.385647 | orchestrator | 2025-10-09 10:16:51.385656 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-10-09 10:16:51.385664 | orchestrator | Thursday 09 October 2025 10:16:45 +0000 (0:00:00.219) 0:00:11.475 ****** 2025-10-09 10:16:51.385673 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.385682 | orchestrator | 2025-10-09 10:16:51.385690 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-10-09 10:16:51.385699 | orchestrator | Thursday 09 October 2025 10:16:45 +0000 (0:00:00.159) 0:00:11.635 ****** 2025-10-09 10:16:51.385708 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54591ae8-508f-52be-a28f-431401b6d1ad'}}) 2025-10-09 10:16:51.385717 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4e7e843-affd-596c-a641-84f7bc574f86'}}) 2025-10-09 10:16:51.385725 | orchestrator | 2025-10-09 10:16:51.385734 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-10-09 10:16:51.385742 | orchestrator | Thursday 09 October 2025 10:16:45 +0000 (0:00:00.212) 0:00:11.847 ****** 2025-10-09 10:16:51.385752 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'}) 2025-10-09 10:16:51.385797 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'}) 2025-10-09 10:16:51.385807 | orchestrator | 2025-10-09 10:16:51.385840 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-10-09 10:16:51.385849 | orchestrator | Thursday 09 October 2025 10:16:47 +0000 (0:00:02.027) 0:00:13.874 ****** 2025-10-09 10:16:51.385864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:51.385874 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:51.385883 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.385891 | orchestrator | 2025-10-09 10:16:51.385900 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-10-09 
10:16:51.385908 | orchestrator | Thursday 09 October 2025 10:16:47 +0000 (0:00:00.141) 0:00:14.016 ****** 2025-10-09 10:16:51.385917 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'}) 2025-10-09 10:16:51.385925 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'}) 2025-10-09 10:16:51.385934 | orchestrator | 2025-10-09 10:16:51.385942 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-10-09 10:16:51.385950 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:01.511) 0:00:15.528 ****** 2025-10-09 10:16:51.385959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:51.385967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:51.385977 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.385985 | orchestrator | 2025-10-09 10:16:51.385994 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-10-09 10:16:51.386003 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:00.178) 0:00:15.707 ****** 2025-10-09 10:16:51.386011 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386062 | orchestrator | 2025-10-09 10:16:51.386072 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-10-09 10:16:51.386095 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:00.147) 0:00:15.854 ****** 2025-10-09 10:16:51.386104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:51.386113 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:51.386122 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386130 | orchestrator | 2025-10-09 10:16:51.386139 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-10-09 10:16:51.386148 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:00.394) 0:00:16.249 ****** 2025-10-09 10:16:51.386156 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386165 | orchestrator | 2025-10-09 10:16:51.386173 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-10-09 10:16:51.386182 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:00.163) 0:00:16.413 ****** 2025-10-09 10:16:51.386190 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:51.386206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:51.386215 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386224 | orchestrator | 2025-10-09 10:16:51.386232 | orchestrator | 
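[Editor's note] The "Create block VGs" and "Create block LVs" tasks above create one volume group per OSD device and a single logical volume inside it, using the names from lvm_volumes. A minimal sketch with the community.general LVM modules; _block_vg_pvs stands in for whatever mapping the earlier "Create dict of block VGs -> PVs" task builds, and the 100%VG size is an assumption.

    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ _block_vg_pvs[item.data_vg] }}"   # e.g. /dev/sdb
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%VG
      loop: "{{ lvm_volumes }}"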
TASK [Create DB+WAL VGs] ******************************************************* 2025-10-09 10:16:51.386241 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.175) 0:00:16.588 ****** 2025-10-09 10:16:51.386249 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386258 | orchestrator | 2025-10-09 10:16:51.386266 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-10-09 10:16:51.386275 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.168) 0:00:16.757 ****** 2025-10-09 10:16:51.386283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:51.386292 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:51.386301 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386310 | orchestrator | 2025-10-09 10:16:51.386337 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-10-09 10:16:51.386346 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.155) 0:00:16.912 ****** 2025-10-09 10:16:51.386354 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:51.386363 | orchestrator | 2025-10-09 10:16:51.386372 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-10-09 10:16:51.386380 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.141) 0:00:17.053 ****** 2025-10-09 10:16:51.386389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:51.386402 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:51.386411 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386420 | orchestrator | 2025-10-09 10:16:51.386428 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-10-09 10:16:51.386437 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.156) 0:00:17.209 ****** 2025-10-09 10:16:51.386445 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:51.386454 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:51.386463 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386471 | orchestrator | 2025-10-09 10:16:51.386480 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-10-09 10:16:51.386488 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.180) 0:00:17.390 ****** 2025-10-09 10:16:51.386497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:51.386505 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  
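[Editor's note] The count and fail tasks in this part of the play guard against assigning more OSDs to a shared DB/WAL device than it is sized for; in this run they are skipped because no ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices are defined. A hedged sketch of the kind of assertion involved; _num_osds_wanted_per_db_vg appears in the log, but the num_osds attribute and the exact comparison are assumptions.

    - name: Fail if number of OSDs exceeds num_osds for a DB VG
      ansible.builtin.fail:
        msg: "{{ item.key }} is referenced by more OSDs than its num_osds setting allows"
      loop: "{{ _num_osds_wanted_per_db_vg | dict2items }}"
      when: item.value > (ceph_db_devices[item.key].num_osds | default(0))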
2025-10-09 10:16:51.386514 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386523 | orchestrator | 2025-10-09 10:16:51.386531 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-10-09 10:16:51.386540 | orchestrator | Thursday 09 October 2025 10:16:51 +0000 (0:00:00.162) 0:00:17.552 ****** 2025-10-09 10:16:51.386549 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386557 | orchestrator | 2025-10-09 10:16:51.386566 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-10-09 10:16:51.386589 | orchestrator | Thursday 09 October 2025 10:16:51 +0000 (0:00:00.153) 0:00:17.706 ****** 2025-10-09 10:16:51.386597 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:51.386606 | orchestrator | 2025-10-09 10:16:51.386619 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-10-09 10:16:58.015879 | orchestrator | Thursday 09 October 2025 10:16:51 +0000 (0:00:00.141) 0:00:17.847 ****** 2025-10-09 10:16:58.015986 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016002 | orchestrator | 2025-10-09 10:16:58.016014 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-10-09 10:16:58.016026 | orchestrator | Thursday 09 October 2025 10:16:51 +0000 (0:00:00.135) 0:00:17.983 ****** 2025-10-09 10:16:58.016037 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:16:58.016049 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-10-09 10:16:58.016060 | orchestrator | } 2025-10-09 10:16:58.016071 | orchestrator | 2025-10-09 10:16:58.016083 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-10-09 10:16:58.016094 | orchestrator | Thursday 09 October 2025 10:16:51 +0000 (0:00:00.356) 0:00:18.339 ****** 2025-10-09 10:16:58.016105 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:16:58.016116 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-10-09 10:16:58.016127 | orchestrator | } 2025-10-09 10:16:58.016138 | orchestrator | 2025-10-09 10:16:58.016149 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-10-09 10:16:58.016160 | orchestrator | Thursday 09 October 2025 10:16:52 +0000 (0:00:00.184) 0:00:18.524 ****** 2025-10-09 10:16:58.016171 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:16:58.016182 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-10-09 10:16:58.016193 | orchestrator | } 2025-10-09 10:16:58.016204 | orchestrator | 2025-10-09 10:16:58.016216 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-10-09 10:16:58.016227 | orchestrator | Thursday 09 October 2025 10:16:52 +0000 (0:00:00.140) 0:00:18.664 ****** 2025-10-09 10:16:58.016238 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:58.016249 | orchestrator | 2025-10-09 10:16:58.016260 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-10-09 10:16:58.016271 | orchestrator | Thursday 09 October 2025 10:16:52 +0000 (0:00:00.697) 0:00:19.362 ****** 2025-10-09 10:16:58.016282 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:58.016293 | orchestrator | 2025-10-09 10:16:58.016304 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-10-09 10:16:58.016364 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 
(0:00:00.509) 0:00:19.871 ****** 2025-10-09 10:16:58.016376 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:58.016387 | orchestrator | 2025-10-09 10:16:58.016398 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-10-09 10:16:58.016409 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 (0:00:00.509) 0:00:20.381 ****** 2025-10-09 10:16:58.016420 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:58.016433 | orchestrator | 2025-10-09 10:16:58.016446 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-10-09 10:16:58.016459 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.149) 0:00:20.530 ****** 2025-10-09 10:16:58.016471 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016484 | orchestrator | 2025-10-09 10:16:58.016497 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-10-09 10:16:58.016509 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.107) 0:00:20.637 ****** 2025-10-09 10:16:58.016521 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016533 | orchestrator | 2025-10-09 10:16:58.016546 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-10-09 10:16:58.016558 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.109) 0:00:20.747 ****** 2025-10-09 10:16:58.016571 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:16:58.016605 | orchestrator |  "vgs_report": { 2025-10-09 10:16:58.016618 | orchestrator |  "vg": [] 2025-10-09 10:16:58.016630 | orchestrator |  } 2025-10-09 10:16:58.016643 | orchestrator | } 2025-10-09 10:16:58.016656 | orchestrator | 2025-10-09 10:16:58.016668 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-10-09 10:16:58.016681 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.139) 0:00:20.886 ****** 2025-10-09 10:16:58.016693 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016704 | orchestrator | 2025-10-09 10:16:58.016716 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-10-09 10:16:58.016728 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.160) 0:00:21.046 ****** 2025-10-09 10:16:58.016740 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016753 | orchestrator | 2025-10-09 10:16:58.016765 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-10-09 10:16:58.016777 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.138) 0:00:21.184 ****** 2025-10-09 10:16:58.016788 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016798 | orchestrator | 2025-10-09 10:16:58.016809 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-10-09 10:16:58.016820 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.368) 0:00:21.553 ****** 2025-10-09 10:16:58.016830 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016841 | orchestrator | 2025-10-09 10:16:58.016851 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-10-09 10:16:58.016862 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.163) 0:00:21.717 ****** 2025-10-09 10:16:58.016873 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016883 | orchestrator 
| 2025-10-09 10:16:58.016912 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-10-09 10:16:58.016923 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.134) 0:00:21.852 ****** 2025-10-09 10:16:58.016934 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016945 | orchestrator | 2025-10-09 10:16:58.016956 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-10-09 10:16:58.016966 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.143) 0:00:21.995 ****** 2025-10-09 10:16:58.016977 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.016988 | orchestrator | 2025-10-09 10:16:58.016998 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-10-09 10:16:58.017009 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.152) 0:00:22.147 ****** 2025-10-09 10:16:58.017019 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017030 | orchestrator | 2025-10-09 10:16:58.017041 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-10-09 10:16:58.017071 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.120) 0:00:22.268 ****** 2025-10-09 10:16:58.017082 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017093 | orchestrator | 2025-10-09 10:16:58.017104 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-10-09 10:16:58.017115 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.135) 0:00:22.403 ****** 2025-10-09 10:16:58.017126 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017137 | orchestrator | 2025-10-09 10:16:58.017148 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-10-09 10:16:58.017158 | orchestrator | Thursday 09 October 2025 10:16:56 +0000 (0:00:00.149) 0:00:22.553 ****** 2025-10-09 10:16:58.017170 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017180 | orchestrator | 2025-10-09 10:16:58.017191 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-10-09 10:16:58.017202 | orchestrator | Thursday 09 October 2025 10:16:56 +0000 (0:00:00.149) 0:00:22.703 ****** 2025-10-09 10:16:58.017213 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017224 | orchestrator | 2025-10-09 10:16:58.017235 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-10-09 10:16:58.017255 | orchestrator | Thursday 09 October 2025 10:16:56 +0000 (0:00:00.129) 0:00:22.833 ****** 2025-10-09 10:16:58.017266 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017277 | orchestrator | 2025-10-09 10:16:58.017288 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-10-09 10:16:58.017299 | orchestrator | Thursday 09 October 2025 10:16:56 +0000 (0:00:00.140) 0:00:22.974 ****** 2025-10-09 10:16:58.017310 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017338 | orchestrator | 2025-10-09 10:16:58.017349 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-10-09 10:16:58.017360 | orchestrator | Thursday 09 October 2025 10:16:56 +0000 (0:00:00.132) 0:00:23.106 ****** 2025-10-09 10:16:58.017373 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:58.017385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:58.017396 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017407 | orchestrator | 2025-10-09 10:16:58.017419 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-10-09 10:16:58.017430 | orchestrator | Thursday 09 October 2025 10:16:57 +0000 (0:00:00.396) 0:00:23.503 ****** 2025-10-09 10:16:58.017441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:58.017452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:58.017463 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017474 | orchestrator | 2025-10-09 10:16:58.017485 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-10-09 10:16:58.017496 | orchestrator | Thursday 09 October 2025 10:16:57 +0000 (0:00:00.168) 0:00:23.671 ****** 2025-10-09 10:16:58.017512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:58.017523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:58.017534 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017545 | orchestrator | 2025-10-09 10:16:58.017556 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-10-09 10:16:58.017567 | orchestrator | Thursday 09 October 2025 10:16:57 +0000 (0:00:00.207) 0:00:23.878 ****** 2025-10-09 10:16:58.017578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:58.017589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:58.017600 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017611 | orchestrator | 2025-10-09 10:16:58.017622 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-10-09 10:16:58.017633 | orchestrator | Thursday 09 October 2025 10:16:57 +0000 (0:00:00.181) 0:00:24.060 ****** 2025-10-09 10:16:58.017644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:58.017655 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:16:58.017666 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.017677 | orchestrator | 2025-10-09 10:16:58.017688 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-10-09 10:16:58.017705 | orchestrator | Thursday 09 October 2025 10:16:57 +0000 (0:00:00.241) 0:00:24.302 ****** 2025-10-09 10:16:58.017716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:16:58.017734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:17:04.006511 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:17:04.006624 | orchestrator | 2025-10-09 10:17:04.006640 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-10-09 10:17:04.006653 | orchestrator | Thursday 09 October 2025 10:16:58 +0000 (0:00:00.173) 0:00:24.476 ****** 2025-10-09 10:17:04.006664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:17:04.006678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:17:04.006689 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:17:04.006700 | orchestrator | 2025-10-09 10:17:04.006711 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-10-09 10:17:04.006722 | orchestrator | Thursday 09 October 2025 10:16:58 +0000 (0:00:00.186) 0:00:24.662 ****** 2025-10-09 10:17:04.006740 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:17:04.006761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:17:04.006781 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:17:04.006801 | orchestrator | 2025-10-09 10:17:04.006822 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-10-09 10:17:04.006841 | orchestrator | Thursday 09 October 2025 10:16:58 +0000 (0:00:00.182) 0:00:24.845 ****** 2025-10-09 10:17:04.006861 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:17:04.006877 | orchestrator | 2025-10-09 10:17:04.006891 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-10-09 10:17:04.006909 | orchestrator | Thursday 09 October 2025 10:16:58 +0000 (0:00:00.502) 0:00:25.347 ****** 2025-10-09 10:17:04.006928 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:17:04.006948 | orchestrator | 2025-10-09 10:17:04.006966 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-10-09 10:17:04.006981 | orchestrator | Thursday 09 October 2025 10:16:59 +0000 (0:00:00.543) 0:00:25.891 ****** 2025-10-09 10:17:04.006992 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:17:04.007003 | orchestrator | 2025-10-09 10:17:04.007014 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-10-09 10:17:04.007025 | orchestrator | Thursday 09 October 2025 10:16:59 +0000 (0:00:00.183) 0:00:26.074 ****** 2025-10-09 10:17:04.007038 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'vg_name': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'}) 2025-10-09 10:17:04.007053 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'vg_name': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'}) 2025-10-09 10:17:04.007065 | orchestrator | 2025-10-09 10:17:04.007078 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-10-09 10:17:04.007091 | orchestrator | Thursday 09 October 2025 10:16:59 +0000 (0:00:00.185) 0:00:26.260 ****** 2025-10-09 10:17:04.007103 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:17:04.007116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:17:04.007155 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:17:04.007169 | orchestrator | 2025-10-09 10:17:04.007181 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-10-09 10:17:04.007194 | orchestrator | Thursday 09 October 2025 10:17:00 +0000 (0:00:00.397) 0:00:26.658 ****** 2025-10-09 10:17:04.007206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:17:04.007219 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:17:04.007232 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:17:04.007244 | orchestrator | 2025-10-09 10:17:04.007257 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-10-09 10:17:04.007270 | orchestrator | Thursday 09 October 2025 10:17:00 +0000 (0:00:00.184) 0:00:26.843 ****** 2025-10-09 10:17:04.007282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'})  2025-10-09 10:17:04.007296 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'})  2025-10-09 10:17:04.007309 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:17:04.007348 | orchestrator | 2025-10-09 10:17:04.007359 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-10-09 10:17:04.007370 | orchestrator | Thursday 09 October 2025 10:17:00 +0000 (0:00:00.173) 0:00:27.016 ****** 2025-10-09 10:17:04.007381 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:17:04.007392 | orchestrator |  "lvm_report": { 2025-10-09 10:17:04.007403 | orchestrator |  "lv": [ 2025-10-09 10:17:04.007414 | orchestrator |  { 2025-10-09 10:17:04.007443 | orchestrator |  "lv_name": "osd-block-54591ae8-508f-52be-a28f-431401b6d1ad", 2025-10-09 10:17:04.007456 | orchestrator |  "vg_name": "ceph-54591ae8-508f-52be-a28f-431401b6d1ad" 2025-10-09 10:17:04.007466 | orchestrator |  }, 2025-10-09 10:17:04.007477 | orchestrator |  { 2025-10-09 10:17:04.007488 | orchestrator |  "lv_name": "osd-block-b4e7e843-affd-596c-a641-84f7bc574f86", 2025-10-09 10:17:04.007498 | orchestrator |  "vg_name": 
"ceph-b4e7e843-affd-596c-a641-84f7bc574f86" 2025-10-09 10:17:04.007509 | orchestrator |  } 2025-10-09 10:17:04.007520 | orchestrator |  ], 2025-10-09 10:17:04.007531 | orchestrator |  "pv": [ 2025-10-09 10:17:04.007541 | orchestrator |  { 2025-10-09 10:17:04.007552 | orchestrator |  "pv_name": "/dev/sdb", 2025-10-09 10:17:04.007563 | orchestrator |  "vg_name": "ceph-54591ae8-508f-52be-a28f-431401b6d1ad" 2025-10-09 10:17:04.007574 | orchestrator |  }, 2025-10-09 10:17:04.007585 | orchestrator |  { 2025-10-09 10:17:04.007595 | orchestrator |  "pv_name": "/dev/sdc", 2025-10-09 10:17:04.007606 | orchestrator |  "vg_name": "ceph-b4e7e843-affd-596c-a641-84f7bc574f86" 2025-10-09 10:17:04.007617 | orchestrator |  } 2025-10-09 10:17:04.007627 | orchestrator |  ] 2025-10-09 10:17:04.007638 | orchestrator |  } 2025-10-09 10:17:04.007649 | orchestrator | } 2025-10-09 10:17:04.007660 | orchestrator | 2025-10-09 10:17:04.007671 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-10-09 10:17:04.007682 | orchestrator | 2025-10-09 10:17:04.007693 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:17:04.007704 | orchestrator | Thursday 09 October 2025 10:17:00 +0000 (0:00:00.287) 0:00:27.304 ****** 2025-10-09 10:17:04.007715 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-10-09 10:17:04.007726 | orchestrator | 2025-10-09 10:17:04.007744 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:17:04.007755 | orchestrator | Thursday 09 October 2025 10:17:01 +0000 (0:00:00.270) 0:00:27.575 ****** 2025-10-09 10:17:04.007766 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:04.007777 | orchestrator | 2025-10-09 10:17:04.007788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:04.007798 | orchestrator | Thursday 09 October 2025 10:17:01 +0000 (0:00:00.277) 0:00:27.852 ****** 2025-10-09 10:17:04.007830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-10-09 10:17:04.007841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-10-09 10:17:04.007852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-10-09 10:17:04.007863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-10-09 10:17:04.007873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-10-09 10:17:04.007884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-10-09 10:17:04.007895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-10-09 10:17:04.007906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-10-09 10:17:04.007921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-10-09 10:17:04.007932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-10-09 10:17:04.007943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-10-09 10:17:04.007954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-10-09 10:17:04.007964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-10-09 10:17:04.007975 | orchestrator | 2025-10-09 10:17:04.007986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:04.007997 | orchestrator | Thursday 09 October 2025 10:17:01 +0000 (0:00:00.466) 0:00:28.319 ****** 2025-10-09 10:17:04.008007 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:04.008018 | orchestrator | 2025-10-09 10:17:04.008029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:04.008039 | orchestrator | Thursday 09 October 2025 10:17:02 +0000 (0:00:00.229) 0:00:28.548 ****** 2025-10-09 10:17:04.008050 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:04.008061 | orchestrator | 2025-10-09 10:17:04.008071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:04.008082 | orchestrator | Thursday 09 October 2025 10:17:02 +0000 (0:00:00.269) 0:00:28.818 ****** 2025-10-09 10:17:04.008093 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:04.008103 | orchestrator | 2025-10-09 10:17:04.008114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:04.008125 | orchestrator | Thursday 09 October 2025 10:17:03 +0000 (0:00:00.684) 0:00:29.502 ****** 2025-10-09 10:17:04.008136 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:04.008146 | orchestrator | 2025-10-09 10:17:04.008157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:04.008168 | orchestrator | Thursday 09 October 2025 10:17:03 +0000 (0:00:00.228) 0:00:29.731 ****** 2025-10-09 10:17:04.008179 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:04.008189 | orchestrator | 2025-10-09 10:17:04.008200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:04.008211 | orchestrator | Thursday 09 October 2025 10:17:03 +0000 (0:00:00.263) 0:00:29.994 ****** 2025-10-09 10:17:04.008222 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:04.008232 | orchestrator | 2025-10-09 10:17:04.008243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:04.008260 | orchestrator | Thursday 09 October 2025 10:17:03 +0000 (0:00:00.240) 0:00:30.235 ****** 2025-10-09 10:17:04.008271 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:04.008282 | orchestrator | 2025-10-09 10:17:04.008300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:14.903162 | orchestrator | Thursday 09 October 2025 10:17:03 +0000 (0:00:00.233) 0:00:30.468 ****** 2025-10-09 10:17:14.903278 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.903294 | orchestrator | 2025-10-09 10:17:14.903308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:14.903370 | orchestrator | Thursday 09 October 2025 10:17:04 +0000 (0:00:00.214) 0:00:30.683 ****** 2025-10-09 10:17:14.903382 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e) 2025-10-09 10:17:14.903395 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e) 2025-10-09 
10:17:14.903406 | orchestrator | 2025-10-09 10:17:14.903417 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:14.903428 | orchestrator | Thursday 09 October 2025 10:17:04 +0000 (0:00:00.420) 0:00:31.103 ****** 2025-10-09 10:17:14.903439 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b) 2025-10-09 10:17:14.903450 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b) 2025-10-09 10:17:14.903460 | orchestrator | 2025-10-09 10:17:14.903471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:14.903482 | orchestrator | Thursday 09 October 2025 10:17:05 +0000 (0:00:00.432) 0:00:31.536 ****** 2025-10-09 10:17:14.903493 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c) 2025-10-09 10:17:14.903504 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c) 2025-10-09 10:17:14.903514 | orchestrator | 2025-10-09 10:17:14.903525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:14.903536 | orchestrator | Thursday 09 October 2025 10:17:05 +0000 (0:00:00.488) 0:00:32.024 ****** 2025-10-09 10:17:14.903546 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256) 2025-10-09 10:17:14.903557 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256) 2025-10-09 10:17:14.903568 | orchestrator | 2025-10-09 10:17:14.903579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:14.903590 | orchestrator | Thursday 09 October 2025 10:17:06 +0000 (0:00:00.467) 0:00:32.491 ****** 2025-10-09 10:17:14.903600 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-09 10:17:14.903611 | orchestrator | 2025-10-09 10:17:14.903622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.903633 | orchestrator | Thursday 09 October 2025 10:17:06 +0000 (0:00:00.348) 0:00:32.839 ****** 2025-10-09 10:17:14.903644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-10-09 10:17:14.903675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-10-09 10:17:14.903689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-10-09 10:17:14.903701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-10-09 10:17:14.903713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-10-09 10:17:14.903725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-10-09 10:17:14.903736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-10-09 10:17:14.903770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-10-09 10:17:14.903783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-10-09 10:17:14.903794 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-10-09 10:17:14.903806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-10-09 10:17:14.903819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-10-09 10:17:14.903831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-10-09 10:17:14.903843 | orchestrator | 2025-10-09 10:17:14.903855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.903867 | orchestrator | Thursday 09 October 2025 10:17:07 +0000 (0:00:00.643) 0:00:33.483 ****** 2025-10-09 10:17:14.903879 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.903890 | orchestrator | 2025-10-09 10:17:14.903903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.903915 | orchestrator | Thursday 09 October 2025 10:17:07 +0000 (0:00:00.224) 0:00:33.707 ****** 2025-10-09 10:17:14.903927 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.903939 | orchestrator | 2025-10-09 10:17:14.903951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.903963 | orchestrator | Thursday 09 October 2025 10:17:07 +0000 (0:00:00.255) 0:00:33.962 ****** 2025-10-09 10:17:14.903975 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.903987 | orchestrator | 2025-10-09 10:17:14.903999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904011 | orchestrator | Thursday 09 October 2025 10:17:07 +0000 (0:00:00.228) 0:00:34.191 ****** 2025-10-09 10:17:14.904023 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904034 | orchestrator | 2025-10-09 10:17:14.904063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904075 | orchestrator | Thursday 09 October 2025 10:17:07 +0000 (0:00:00.236) 0:00:34.427 ****** 2025-10-09 10:17:14.904086 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904096 | orchestrator | 2025-10-09 10:17:14.904107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904118 | orchestrator | Thursday 09 October 2025 10:17:08 +0000 (0:00:00.216) 0:00:34.644 ****** 2025-10-09 10:17:14.904129 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904140 | orchestrator | 2025-10-09 10:17:14.904150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904161 | orchestrator | Thursday 09 October 2025 10:17:08 +0000 (0:00:00.229) 0:00:34.874 ****** 2025-10-09 10:17:14.904171 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904182 | orchestrator | 2025-10-09 10:17:14.904193 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904203 | orchestrator | Thursday 09 October 2025 10:17:08 +0000 (0:00:00.237) 0:00:35.111 ****** 2025-10-09 10:17:14.904214 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904225 | orchestrator | 2025-10-09 10:17:14.904236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904247 | orchestrator 
| Thursday 09 October 2025 10:17:08 +0000 (0:00:00.214) 0:00:35.325 ****** 2025-10-09 10:17:14.904257 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-10-09 10:17:14.904268 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-10-09 10:17:14.904279 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-10-09 10:17:14.904290 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-10-09 10:17:14.904301 | orchestrator | 2025-10-09 10:17:14.904332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904345 | orchestrator | Thursday 09 October 2025 10:17:09 +0000 (0:00:00.954) 0:00:36.279 ****** 2025-10-09 10:17:14.904366 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904377 | orchestrator | 2025-10-09 10:17:14.904388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904399 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.221) 0:00:36.500 ****** 2025-10-09 10:17:14.904410 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904420 | orchestrator | 2025-10-09 10:17:14.904431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904442 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.689) 0:00:37.190 ****** 2025-10-09 10:17:14.904452 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904463 | orchestrator | 2025-10-09 10:17:14.904474 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:14.904485 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.213) 0:00:37.404 ****** 2025-10-09 10:17:14.904496 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904506 | orchestrator | 2025-10-09 10:17:14.904517 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-10-09 10:17:14.904528 | orchestrator | Thursday 09 October 2025 10:17:11 +0000 (0:00:00.220) 0:00:37.624 ****** 2025-10-09 10:17:14.904539 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904549 | orchestrator | 2025-10-09 10:17:14.904560 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-10-09 10:17:14.904571 | orchestrator | Thursday 09 October 2025 10:17:11 +0000 (0:00:00.147) 0:00:37.771 ****** 2025-10-09 10:17:14.904582 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}}) 2025-10-09 10:17:14.904593 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd550970-9f11-5339-ba56-252bcc7474b0'}}) 2025-10-09 10:17:14.904604 | orchestrator | 2025-10-09 10:17:14.904615 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-10-09 10:17:14.904626 | orchestrator | Thursday 09 October 2025 10:17:11 +0000 (0:00:00.210) 0:00:37.981 ****** 2025-10-09 10:17:14.904637 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}) 2025-10-09 10:17:14.904649 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'}) 2025-10-09 10:17:14.904660 | orchestrator | 2025-10-09 10:17:14.904671 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-10-09 10:17:14.904681 | orchestrator | Thursday 09 October 2025 10:17:13 +0000 (0:00:01.861) 0:00:39.843 ****** 2025-10-09 10:17:14.904692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:14.904704 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:14.904715 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:14.904726 | orchestrator | 2025-10-09 10:17:14.904737 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-10-09 10:17:14.904748 | orchestrator | Thursday 09 October 2025 10:17:13 +0000 (0:00:00.176) 0:00:40.019 ****** 2025-10-09 10:17:14.904758 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}) 2025-10-09 10:17:14.904769 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'}) 2025-10-09 10:17:14.904780 | orchestrator | 2025-10-09 10:17:14.904798 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-10-09 10:17:20.345682 | orchestrator | Thursday 09 October 2025 10:17:14 +0000 (0:00:01.342) 0:00:41.361 ****** 2025-10-09 10:17:20.345815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:20.345832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:20.345843 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.345856 | orchestrator | 2025-10-09 10:17:20.345868 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-10-09 10:17:20.345880 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.151) 0:00:41.513 ****** 2025-10-09 10:17:20.345891 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.345901 | orchestrator | 2025-10-09 10:17:20.345913 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-10-09 10:17:20.345923 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.134) 0:00:41.647 ****** 2025-10-09 10:17:20.345935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:20.345963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:20.345975 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.345986 | orchestrator | 2025-10-09 10:17:20.345997 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-10-09 10:17:20.346007 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.151) 0:00:41.798 ****** 2025-10-09 10:17:20.346069 | orchestrator | skipping: [testbed-node-4] 
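The play above prepares LVM for Ceph OSDs in a fixed pattern: each entry in ceph_osd_devices (here sdb and sdc, each with a precomputed osd_lvm_uuid) gets its own volume group ceph-<uuid> on the raw device and a single logical volume osd-block-<uuid> that spans the whole VG, and the result is then read back with lvs/pvs in JSON report format for the later validation tasks. A minimal Ansible sketch of equivalent tasks, assuming the ceph_osd_devices dict shown in the log and that each key maps directly to /dev/<key>; the module choices (community.general.lvg / community.general.lvol) and options are illustrative assumptions, not the project's actual role code in /ansible/tasks/:

  # Hypothetical sketch -- assumes ceph_osd_devices looks like:
  #   ceph_osd_devices:
  #     sdb: { osd_lvm_uuid: "ef021fd3-..." }
  #     sdc: { osd_lvm_uuid: "dd550970-..." }
  - name: Create one VG per Ceph OSD device (assumes the device path is /dev/<key>)
    community.general.lvg:
      vg: "ceph-{{ item.value.osd_lvm_uuid }}"
      pvs: "/dev/{{ item.key }}"
    loop: "{{ ceph_osd_devices | dict2items }}"

  - name: Create one block LV per VG, spanning the whole VG
    community.general.lvol:
      vg: "ceph-{{ item.value.osd_lvm_uuid }}"
      lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
      size: 100%FREE
    loop: "{{ ceph_osd_devices | dict2items }}"

  - name: Get list of Ceph LVs with associated VGs (read-only JSON report)
    ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
    register: _lvs_cmd_output
    changed_when: false

  - name: Print LVM report data
    ansible.builtin.debug:
      msg: "{{ (_lvs_cmd_output.stdout | from_json).report }}"

With this layout the subsequent checks ("Fail if block LV defined in lvm_volumes is missing", the DB/WAL size checks) reduce to comparing the reported lv_name/vg_name pairs against lvm_volumes; on these nodes no separate ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices are configured, which is consistent with the empty vgs_report and with all of the DB/WAL tasks being skipped here.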
2025-10-09 10:17:20.346082 | orchestrator | 2025-10-09 10:17:20.346093 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-10-09 10:17:20.346104 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.147) 0:00:41.946 ****** 2025-10-09 10:17:20.346115 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:20.346127 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:20.346137 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346148 | orchestrator | 2025-10-09 10:17:20.346159 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-10-09 10:17:20.346170 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.158) 0:00:42.105 ****** 2025-10-09 10:17:20.346181 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346192 | orchestrator | 2025-10-09 10:17:20.346210 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-10-09 10:17:20.346223 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.287) 0:00:42.392 ****** 2025-10-09 10:17:20.346236 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:20.346249 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:20.346261 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346273 | orchestrator | 2025-10-09 10:17:20.346285 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-10-09 10:17:20.346297 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.181) 0:00:42.573 ****** 2025-10-09 10:17:20.346332 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:20.346345 | orchestrator | 2025-10-09 10:17:20.346358 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-10-09 10:17:20.346369 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.159) 0:00:42.732 ****** 2025-10-09 10:17:20.346394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:20.346407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:20.346420 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346432 | orchestrator | 2025-10-09 10:17:20.346444 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-10-09 10:17:20.346456 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.158) 0:00:42.891 ****** 2025-10-09 10:17:20.346469 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:20.346482 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:20.346494 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346506 | orchestrator | 2025-10-09 10:17:20.346518 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-10-09 10:17:20.346530 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.155) 0:00:43.046 ****** 2025-10-09 10:17:20.346561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:20.346574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:20.346585 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346596 | orchestrator | 2025-10-09 10:17:20.346607 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-10-09 10:17:20.346617 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.148) 0:00:43.195 ****** 2025-10-09 10:17:20.346628 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346638 | orchestrator | 2025-10-09 10:17:20.346649 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-10-09 10:17:20.346660 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.142) 0:00:43.337 ****** 2025-10-09 10:17:20.346670 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346681 | orchestrator | 2025-10-09 10:17:20.346692 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-10-09 10:17:20.346702 | orchestrator | Thursday 09 October 2025 10:17:17 +0000 (0:00:00.148) 0:00:43.485 ****** 2025-10-09 10:17:20.346713 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.346724 | orchestrator | 2025-10-09 10:17:20.346735 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-10-09 10:17:20.346745 | orchestrator | Thursday 09 October 2025 10:17:17 +0000 (0:00:00.134) 0:00:43.620 ****** 2025-10-09 10:17:20.346756 | orchestrator | ok: [testbed-node-4] => { 2025-10-09 10:17:20.346767 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-10-09 10:17:20.346778 | orchestrator | } 2025-10-09 10:17:20.346789 | orchestrator | 2025-10-09 10:17:20.346799 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-10-09 10:17:20.346810 | orchestrator | Thursday 09 October 2025 10:17:17 +0000 (0:00:00.137) 0:00:43.757 ****** 2025-10-09 10:17:20.346821 | orchestrator | ok: [testbed-node-4] => { 2025-10-09 10:17:20.346831 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-10-09 10:17:20.346842 | orchestrator | } 2025-10-09 10:17:20.346852 | orchestrator | 2025-10-09 10:17:20.346863 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-10-09 10:17:20.346874 | orchestrator | Thursday 09 October 2025 10:17:17 +0000 (0:00:00.184) 0:00:43.942 ****** 2025-10-09 10:17:20.346885 | orchestrator | ok: [testbed-node-4] => { 2025-10-09 10:17:20.346896 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-10-09 10:17:20.346907 | orchestrator | } 2025-10-09 10:17:20.346924 | orchestrator | 2025-10-09 10:17:20.346935 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-10-09 10:17:20.346946 | orchestrator | Thursday 09 October 2025 10:17:17 +0000 (0:00:00.134) 0:00:44.077 ****** 2025-10-09 10:17:20.346956 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:20.346967 | orchestrator | 2025-10-09 10:17:20.346978 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-10-09 10:17:20.346989 | orchestrator | Thursday 09 October 2025 10:17:18 +0000 (0:00:00.673) 0:00:44.750 ****** 2025-10-09 10:17:20.346999 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:20.347010 | orchestrator | 2025-10-09 10:17:20.347026 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-10-09 10:17:20.347036 | orchestrator | Thursday 09 October 2025 10:17:18 +0000 (0:00:00.506) 0:00:45.257 ****** 2025-10-09 10:17:20.347047 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:20.347058 | orchestrator | 2025-10-09 10:17:20.347069 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-10-09 10:17:20.347080 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.498) 0:00:45.756 ****** 2025-10-09 10:17:20.347091 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:20.347101 | orchestrator | 2025-10-09 10:17:20.347112 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-10-09 10:17:20.347123 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.143) 0:00:45.900 ****** 2025-10-09 10:17:20.347134 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.347145 | orchestrator | 2025-10-09 10:17:20.347155 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-10-09 10:17:20.347166 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.111) 0:00:46.011 ****** 2025-10-09 10:17:20.347176 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.347187 | orchestrator | 2025-10-09 10:17:20.347198 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-10-09 10:17:20.347208 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.104) 0:00:46.116 ****** 2025-10-09 10:17:20.347219 | orchestrator | ok: [testbed-node-4] => { 2025-10-09 10:17:20.347230 | orchestrator |  "vgs_report": { 2025-10-09 10:17:20.347241 | orchestrator |  "vg": [] 2025-10-09 10:17:20.347252 | orchestrator |  } 2025-10-09 10:17:20.347262 | orchestrator | } 2025-10-09 10:17:20.347273 | orchestrator | 2025-10-09 10:17:20.347283 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-10-09 10:17:20.347294 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.150) 0:00:46.266 ****** 2025-10-09 10:17:20.347305 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.347333 | orchestrator | 2025-10-09 10:17:20.347344 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-10-09 10:17:20.347355 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.133) 0:00:46.400 ****** 2025-10-09 10:17:20.347366 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.347377 | orchestrator | 2025-10-09 10:17:20.347387 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-10-09 10:17:20.347398 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.144) 
0:00:46.545 ****** 2025-10-09 10:17:20.347409 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.347420 | orchestrator | 2025-10-09 10:17:20.347431 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-10-09 10:17:20.347442 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.141) 0:00:46.686 ****** 2025-10-09 10:17:20.347453 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:20.347464 | orchestrator | 2025-10-09 10:17:20.347474 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-10-09 10:17:20.347492 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.119) 0:00:46.806 ****** 2025-10-09 10:17:25.261777 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.261893 | orchestrator | 2025-10-09 10:17:25.261909 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-10-09 10:17:25.261962 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.126) 0:00:46.933 ****** 2025-10-09 10:17:25.261983 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.261994 | orchestrator | 2025-10-09 10:17:25.262005 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-10-09 10:17:25.262051 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.327) 0:00:47.261 ****** 2025-10-09 10:17:25.262062 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262073 | orchestrator | 2025-10-09 10:17:25.262082 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-10-09 10:17:25.262092 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.135) 0:00:47.396 ****** 2025-10-09 10:17:25.262101 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262111 | orchestrator | 2025-10-09 10:17:25.262121 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-10-09 10:17:25.262131 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.135) 0:00:47.531 ****** 2025-10-09 10:17:25.262140 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262150 | orchestrator | 2025-10-09 10:17:25.262160 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-10-09 10:17:25.262170 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.154) 0:00:47.685 ****** 2025-10-09 10:17:25.262179 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262189 | orchestrator | 2025-10-09 10:17:25.262199 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-10-09 10:17:25.262208 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.137) 0:00:47.823 ****** 2025-10-09 10:17:25.262218 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262228 | orchestrator | 2025-10-09 10:17:25.262237 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-10-09 10:17:25.262247 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.136) 0:00:47.960 ****** 2025-10-09 10:17:25.262257 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262267 | orchestrator | 2025-10-09 10:17:25.262276 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-10-09 10:17:25.262286 | orchestrator | Thursday 09 October 2025 10:17:21 
+0000 (0:00:00.158) 0:00:48.119 ****** 2025-10-09 10:17:25.262296 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262306 | orchestrator | 2025-10-09 10:17:25.262349 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-10-09 10:17:25.262360 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.118) 0:00:48.237 ****** 2025-10-09 10:17:25.262371 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262382 | orchestrator | 2025-10-09 10:17:25.262393 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-10-09 10:17:25.262404 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.140) 0:00:48.378 ****** 2025-10-09 10:17:25.262429 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.262442 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.262453 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262465 | orchestrator | 2025-10-09 10:17:25.262476 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-10-09 10:17:25.262487 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.159) 0:00:48.537 ****** 2025-10-09 10:17:25.262498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.262509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.262529 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262540 | orchestrator | 2025-10-09 10:17:25.262551 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-10-09 10:17:25.262562 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.169) 0:00:48.707 ****** 2025-10-09 10:17:25.262573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.262584 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.262595 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262606 | orchestrator | 2025-10-09 10:17:25.262617 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-10-09 10:17:25.262628 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.192) 0:00:48.900 ****** 2025-10-09 10:17:25.262639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.262650 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.262661 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262672 | orchestrator | 2025-10-09 10:17:25.262682 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-10-09 10:17:25.262708 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.371) 0:00:49.272 ****** 2025-10-09 10:17:25.262719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.262729 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.262739 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262749 | orchestrator | 2025-10-09 10:17:25.262758 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-10-09 10:17:25.262768 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.165) 0:00:49.437 ****** 2025-10-09 10:17:25.262778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.262788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.262797 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262807 | orchestrator | 2025-10-09 10:17:25.262818 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-10-09 10:17:25.262828 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.157) 0:00:49.594 ****** 2025-10-09 10:17:25.262838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.262847 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.262857 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262867 | orchestrator | 2025-10-09 10:17:25.262877 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-10-09 10:17:25.262887 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.157) 0:00:49.752 ****** 2025-10-09 10:17:25.262897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.262906 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.262922 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.262932 | orchestrator | 2025-10-09 10:17:25.262942 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-10-09 10:17:25.262989 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.189) 0:00:49.942 ****** 2025-10-09 10:17:25.262999 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:25.263009 | orchestrator | 2025-10-09 10:17:25.263019 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-10-09 10:17:25.263029 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.508) 
0:00:50.450 ****** 2025-10-09 10:17:25.263038 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:25.263048 | orchestrator | 2025-10-09 10:17:25.263058 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-10-09 10:17:25.263067 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.536) 0:00:50.987 ****** 2025-10-09 10:17:25.263077 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:17:25.263087 | orchestrator | 2025-10-09 10:17:25.263096 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-10-09 10:17:25.263106 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.175) 0:00:51.163 ****** 2025-10-09 10:17:25.263116 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'vg_name': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'}) 2025-10-09 10:17:25.263126 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'vg_name': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}) 2025-10-09 10:17:25.263136 | orchestrator | 2025-10-09 10:17:25.263146 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-10-09 10:17:25.263155 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.206) 0:00:51.369 ****** 2025-10-09 10:17:25.263165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.263175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.263184 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:25.263194 | orchestrator | 2025-10-09 10:17:25.263204 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-10-09 10:17:25.263213 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.186) 0:00:51.556 ****** 2025-10-09 10:17:25.263223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:25.263233 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:25.263249 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:31.779742 | orchestrator | 2025-10-09 10:17:31.779852 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-10-09 10:17:31.779869 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.168) 0:00:51.724 ****** 2025-10-09 10:17:31.779882 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'})  2025-10-09 10:17:31.779895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'})  2025-10-09 10:17:31.779906 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:31.779918 | orchestrator | 2025-10-09 10:17:31.779930 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-10-09 
10:17:31.779941 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.175) 0:00:51.900 ****** 2025-10-09 10:17:31.779977 | orchestrator | ok: [testbed-node-4] => { 2025-10-09 10:17:31.779989 | orchestrator |  "lvm_report": { 2025-10-09 10:17:31.780000 | orchestrator |  "lv": [ 2025-10-09 10:17:31.780012 | orchestrator |  { 2025-10-09 10:17:31.780023 | orchestrator |  "lv_name": "osd-block-dd550970-9f11-5339-ba56-252bcc7474b0", 2025-10-09 10:17:31.780035 | orchestrator |  "vg_name": "ceph-dd550970-9f11-5339-ba56-252bcc7474b0" 2025-10-09 10:17:31.780046 | orchestrator |  }, 2025-10-09 10:17:31.780057 | orchestrator |  { 2025-10-09 10:17:31.780067 | orchestrator |  "lv_name": "osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4", 2025-10-09 10:17:31.780078 | orchestrator |  "vg_name": "ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4" 2025-10-09 10:17:31.780089 | orchestrator |  } 2025-10-09 10:17:31.780100 | orchestrator |  ], 2025-10-09 10:17:31.780110 | orchestrator |  "pv": [ 2025-10-09 10:17:31.780121 | orchestrator |  { 2025-10-09 10:17:31.780132 | orchestrator |  "pv_name": "/dev/sdb", 2025-10-09 10:17:31.780143 | orchestrator |  "vg_name": "ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4" 2025-10-09 10:17:31.780153 | orchestrator |  }, 2025-10-09 10:17:31.780164 | orchestrator |  { 2025-10-09 10:17:31.780175 | orchestrator |  "pv_name": "/dev/sdc", 2025-10-09 10:17:31.780186 | orchestrator |  "vg_name": "ceph-dd550970-9f11-5339-ba56-252bcc7474b0" 2025-10-09 10:17:31.780196 | orchestrator |  } 2025-10-09 10:17:31.780207 | orchestrator |  ] 2025-10-09 10:17:31.780217 | orchestrator |  } 2025-10-09 10:17:31.780228 | orchestrator | } 2025-10-09 10:17:31.780240 | orchestrator | 2025-10-09 10:17:31.780251 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-10-09 10:17:31.780261 | orchestrator | 2025-10-09 10:17:31.780273 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:17:31.780286 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.550) 0:00:52.450 ****** 2025-10-09 10:17:31.780297 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-10-09 10:17:31.780346 | orchestrator | 2025-10-09 10:17:31.780374 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:17:31.780387 | orchestrator | Thursday 09 October 2025 10:17:26 +0000 (0:00:00.288) 0:00:52.739 ****** 2025-10-09 10:17:31.780398 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:31.780411 | orchestrator | 2025-10-09 10:17:31.780424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.780436 | orchestrator | Thursday 09 October 2025 10:17:26 +0000 (0:00:00.257) 0:00:52.996 ****** 2025-10-09 10:17:31.780448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-10-09 10:17:31.780460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-10-09 10:17:31.780472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-10-09 10:17:31.780484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-10-09 10:17:31.780496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-10-09 10:17:31.780508 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-10-09 10:17:31.780520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-10-09 10:17:31.780532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-10-09 10:17:31.780544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-10-09 10:17:31.780555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-10-09 10:17:31.780567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-10-09 10:17:31.780589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-10-09 10:17:31.780602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-10-09 10:17:31.780614 | orchestrator | 2025-10-09 10:17:31.780626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.780637 | orchestrator | Thursday 09 October 2025 10:17:26 +0000 (0:00:00.444) 0:00:53.441 ****** 2025-10-09 10:17:31.780648 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:31.780659 | orchestrator | 2025-10-09 10:17:31.780674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.780685 | orchestrator | Thursday 09 October 2025 10:17:27 +0000 (0:00:00.215) 0:00:53.657 ****** 2025-10-09 10:17:31.780696 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:31.780707 | orchestrator | 2025-10-09 10:17:31.780718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.780746 | orchestrator | Thursday 09 October 2025 10:17:27 +0000 (0:00:00.220) 0:00:53.878 ****** 2025-10-09 10:17:31.780757 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:31.780768 | orchestrator | 2025-10-09 10:17:31.780779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.780789 | orchestrator | Thursday 09 October 2025 10:17:27 +0000 (0:00:00.206) 0:00:54.084 ****** 2025-10-09 10:17:31.780800 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:31.780811 | orchestrator | 2025-10-09 10:17:31.780821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.780832 | orchestrator | Thursday 09 October 2025 10:17:27 +0000 (0:00:00.214) 0:00:54.299 ****** 2025-10-09 10:17:31.780843 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:31.780853 | orchestrator | 2025-10-09 10:17:31.780864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.780875 | orchestrator | Thursday 09 October 2025 10:17:28 +0000 (0:00:00.218) 0:00:54.517 ****** 2025-10-09 10:17:31.780886 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:31.780896 | orchestrator | 2025-10-09 10:17:31.780907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.780918 | orchestrator | Thursday 09 October 2025 10:17:28 +0000 (0:00:00.631) 0:00:55.149 ****** 2025-10-09 10:17:31.780929 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:31.780940 | orchestrator | 2025-10-09 10:17:31.780950 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-10-09 10:17:31.780961 | orchestrator | Thursday 09 October 2025 10:17:28 +0000 (0:00:00.227) 0:00:55.377 ****** 2025-10-09 10:17:31.780972 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:31.780983 | orchestrator | 2025-10-09 10:17:31.780993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.781004 | orchestrator | Thursday 09 October 2025 10:17:29 +0000 (0:00:00.209) 0:00:55.587 ****** 2025-10-09 10:17:31.781015 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633) 2025-10-09 10:17:31.781027 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633) 2025-10-09 10:17:31.781038 | orchestrator | 2025-10-09 10:17:31.781048 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.781059 | orchestrator | Thursday 09 October 2025 10:17:29 +0000 (0:00:00.428) 0:00:56.015 ****** 2025-10-09 10:17:31.781070 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317) 2025-10-09 10:17:31.781081 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317) 2025-10-09 10:17:31.781091 | orchestrator | 2025-10-09 10:17:31.781102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.781113 | orchestrator | Thursday 09 October 2025 10:17:29 +0000 (0:00:00.446) 0:00:56.461 ****** 2025-10-09 10:17:31.781129 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5) 2025-10-09 10:17:31.781155 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5) 2025-10-09 10:17:31.781166 | orchestrator | 2025-10-09 10:17:31.781177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.781188 | orchestrator | Thursday 09 October 2025 10:17:30 +0000 (0:00:00.496) 0:00:56.958 ****** 2025-10-09 10:17:31.781198 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb) 2025-10-09 10:17:31.781209 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb) 2025-10-09 10:17:31.781220 | orchestrator | 2025-10-09 10:17:31.781230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:31.781241 | orchestrator | Thursday 09 October 2025 10:17:30 +0000 (0:00:00.474) 0:00:57.433 ****** 2025-10-09 10:17:31.781252 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-09 10:17:31.781262 | orchestrator | 2025-10-09 10:17:31.781273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:31.781284 | orchestrator | Thursday 09 October 2025 10:17:31 +0000 (0:00:00.341) 0:00:57.774 ****** 2025-10-09 10:17:31.781294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-10-09 10:17:31.781305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-10-09 10:17:31.781336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 
2025-10-09 10:17:31.781347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-10-09 10:17:31.781357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-10-09 10:17:31.781368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-10-09 10:17:31.781379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-10-09 10:17:31.781390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-10-09 10:17:31.781400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-10-09 10:17:31.781410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-10-09 10:17:31.781421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-10-09 10:17:31.781439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-10-09 10:17:41.293880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-10-09 10:17:41.293993 | orchestrator | 2025-10-09 10:17:41.294009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294080 | orchestrator | Thursday 09 October 2025 10:17:31 +0000 (0:00:00.456) 0:00:58.231 ****** 2025-10-09 10:17:41.294092 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294105 | orchestrator | 2025-10-09 10:17:41.294116 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294127 | orchestrator | Thursday 09 October 2025 10:17:32 +0000 (0:00:00.239) 0:00:58.470 ****** 2025-10-09 10:17:41.294138 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294149 | orchestrator | 2025-10-09 10:17:41.294160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294172 | orchestrator | Thursday 09 October 2025 10:17:32 +0000 (0:00:00.745) 0:00:59.216 ****** 2025-10-09 10:17:41.294182 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294193 | orchestrator | 2025-10-09 10:17:41.294204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294215 | orchestrator | Thursday 09 October 2025 10:17:32 +0000 (0:00:00.216) 0:00:59.433 ****** 2025-10-09 10:17:41.294248 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294259 | orchestrator | 2025-10-09 10:17:41.294270 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294280 | orchestrator | Thursday 09 October 2025 10:17:33 +0000 (0:00:00.236) 0:00:59.670 ****** 2025-10-09 10:17:41.294291 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294302 | orchestrator | 2025-10-09 10:17:41.294350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294361 | orchestrator | Thursday 09 October 2025 10:17:33 +0000 (0:00:00.235) 0:00:59.905 ****** 2025-10-09 10:17:41.294372 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294383 | orchestrator | 2025-10-09 10:17:41.294394 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294405 | orchestrator | Thursday 09 October 2025 10:17:33 +0000 (0:00:00.222) 0:01:00.128 ****** 2025-10-09 10:17:41.294417 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294429 | orchestrator | 2025-10-09 10:17:41.294441 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294454 | orchestrator | Thursday 09 October 2025 10:17:33 +0000 (0:00:00.222) 0:01:00.351 ****** 2025-10-09 10:17:41.294466 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294478 | orchestrator | 2025-10-09 10:17:41.294490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294502 | orchestrator | Thursday 09 October 2025 10:17:34 +0000 (0:00:00.244) 0:01:00.595 ****** 2025-10-09 10:17:41.294514 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-10-09 10:17:41.294527 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-10-09 10:17:41.294539 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-10-09 10:17:41.294552 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-10-09 10:17:41.294563 | orchestrator | 2025-10-09 10:17:41.294575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294587 | orchestrator | Thursday 09 October 2025 10:17:34 +0000 (0:00:00.738) 0:01:01.334 ****** 2025-10-09 10:17:41.294599 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294611 | orchestrator | 2025-10-09 10:17:41.294623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294635 | orchestrator | Thursday 09 October 2025 10:17:35 +0000 (0:00:00.229) 0:01:01.564 ****** 2025-10-09 10:17:41.294647 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294659 | orchestrator | 2025-10-09 10:17:41.294671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294684 | orchestrator | Thursday 09 October 2025 10:17:35 +0000 (0:00:00.198) 0:01:01.762 ****** 2025-10-09 10:17:41.294697 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294708 | orchestrator | 2025-10-09 10:17:41.294720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:17:41.294733 | orchestrator | Thursday 09 October 2025 10:17:35 +0000 (0:00:00.207) 0:01:01.969 ****** 2025-10-09 10:17:41.294745 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294757 | orchestrator | 2025-10-09 10:17:41.294769 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-10-09 10:17:41.294780 | orchestrator | Thursday 09 October 2025 10:17:35 +0000 (0:00:00.226) 0:01:02.195 ****** 2025-10-09 10:17:41.294791 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.294801 | orchestrator | 2025-10-09 10:17:41.294812 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-10-09 10:17:41.294823 | orchestrator | Thursday 09 October 2025 10:17:36 +0000 (0:00:00.384) 0:01:02.580 ****** 2025-10-09 10:17:41.294834 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '858e0050-0995-591a-8058-d3b9e3e91008'}}) 2025-10-09 10:17:41.294845 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': '7a727a7b-5e51-5ff8-a4b5-d0215765e108'}}) 2025-10-09 10:17:41.294863 | orchestrator | 2025-10-09 10:17:41.294874 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-10-09 10:17:41.294885 | orchestrator | Thursday 09 October 2025 10:17:36 +0000 (0:00:00.212) 0:01:02.792 ****** 2025-10-09 10:17:41.294897 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'}) 2025-10-09 10:17:41.294909 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'}) 2025-10-09 10:17:41.294920 | orchestrator | 2025-10-09 10:17:41.294931 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-10-09 10:17:41.294959 | orchestrator | Thursday 09 October 2025 10:17:38 +0000 (0:00:01.833) 0:01:04.626 ****** 2025-10-09 10:17:41.294971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:41.294984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:41.294995 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.295006 | orchestrator | 2025-10-09 10:17:41.295016 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-10-09 10:17:41.295027 | orchestrator | Thursday 09 October 2025 10:17:38 +0000 (0:00:00.175) 0:01:04.801 ****** 2025-10-09 10:17:41.295038 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'}) 2025-10-09 10:17:41.295067 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'}) 2025-10-09 10:17:41.295079 | orchestrator | 2025-10-09 10:17:41.295090 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-10-09 10:17:41.295101 | orchestrator | Thursday 09 October 2025 10:17:39 +0000 (0:00:01.269) 0:01:06.071 ****** 2025-10-09 10:17:41.295112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:41.295123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:41.295134 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.295145 | orchestrator | 2025-10-09 10:17:41.295156 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-10-09 10:17:41.295166 | orchestrator | Thursday 09 October 2025 10:17:39 +0000 (0:00:00.157) 0:01:06.228 ****** 2025-10-09 10:17:41.295177 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.295188 | orchestrator | 2025-10-09 10:17:41.295198 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-10-09 10:17:41.295209 | orchestrator | Thursday 09 October 2025 10:17:39 +0000 (0:00:00.140) 0:01:06.369 
****** 2025-10-09 10:17:41.295220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:41.295235 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:41.295246 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.295257 | orchestrator | 2025-10-09 10:17:41.295268 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-10-09 10:17:41.295279 | orchestrator | Thursday 09 October 2025 10:17:40 +0000 (0:00:00.161) 0:01:06.531 ****** 2025-10-09 10:17:41.295289 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.295300 | orchestrator | 2025-10-09 10:17:41.295330 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-10-09 10:17:41.295349 | orchestrator | Thursday 09 October 2025 10:17:40 +0000 (0:00:00.152) 0:01:06.684 ****** 2025-10-09 10:17:41.295359 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:41.295370 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:41.295381 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.295392 | orchestrator | 2025-10-09 10:17:41.295403 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-10-09 10:17:41.295413 | orchestrator | Thursday 09 October 2025 10:17:40 +0000 (0:00:00.158) 0:01:06.842 ****** 2025-10-09 10:17:41.295424 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.295435 | orchestrator | 2025-10-09 10:17:41.295445 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-10-09 10:17:41.295456 | orchestrator | Thursday 09 October 2025 10:17:40 +0000 (0:00:00.147) 0:01:06.990 ****** 2025-10-09 10:17:41.295467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:41.295478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:41.295489 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:41.295499 | orchestrator | 2025-10-09 10:17:41.295510 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-10-09 10:17:41.295521 | orchestrator | Thursday 09 October 2025 10:17:40 +0000 (0:00:00.157) 0:01:07.147 ****** 2025-10-09 10:17:41.295531 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:41.295542 | orchestrator | 2025-10-09 10:17:41.295553 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-10-09 10:17:41.295564 | orchestrator | Thursday 09 October 2025 10:17:41 +0000 (0:00:00.417) 0:01:07.565 ****** 2025-10-09 10:17:41.295582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:47.754619 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:47.754735 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.754753 | orchestrator | 2025-10-09 10:17:47.754765 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-10-09 10:17:47.754778 | orchestrator | Thursday 09 October 2025 10:17:41 +0000 (0:00:00.191) 0:01:07.756 ****** 2025-10-09 10:17:47.754790 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:47.754801 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:47.754812 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.754823 | orchestrator | 2025-10-09 10:17:47.754835 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-10-09 10:17:47.754846 | orchestrator | Thursday 09 October 2025 10:17:41 +0000 (0:00:00.184) 0:01:07.941 ****** 2025-10-09 10:17:47.754858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:47.754869 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:47.754880 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.754891 | orchestrator | 2025-10-09 10:17:47.754927 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-10-09 10:17:47.754938 | orchestrator | Thursday 09 October 2025 10:17:41 +0000 (0:00:00.194) 0:01:08.135 ****** 2025-10-09 10:17:47.754949 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.754960 | orchestrator | 2025-10-09 10:17:47.754970 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-10-09 10:17:47.754981 | orchestrator | Thursday 09 October 2025 10:17:41 +0000 (0:00:00.154) 0:01:08.290 ****** 2025-10-09 10:17:47.754992 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755002 | orchestrator | 2025-10-09 10:17:47.755013 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-10-09 10:17:47.755023 | orchestrator | Thursday 09 October 2025 10:17:41 +0000 (0:00:00.150) 0:01:08.441 ****** 2025-10-09 10:17:47.755034 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755045 | orchestrator | 2025-10-09 10:17:47.755055 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-10-09 10:17:47.755081 | orchestrator | Thursday 09 October 2025 10:17:42 +0000 (0:00:00.151) 0:01:08.593 ****** 2025-10-09 10:17:47.755092 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:17:47.755104 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-10-09 10:17:47.755115 | orchestrator | } 2025-10-09 10:17:47.755126 | orchestrator | 2025-10-09 10:17:47.755136 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-10-09 10:17:47.755150 | orchestrator | Thursday 09 October 2025 10:17:42 +0000 
(0:00:00.159) 0:01:08.752 ****** 2025-10-09 10:17:47.755162 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:17:47.755175 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-10-09 10:17:47.755186 | orchestrator | } 2025-10-09 10:17:47.755199 | orchestrator | 2025-10-09 10:17:47.755211 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-10-09 10:17:47.755223 | orchestrator | Thursday 09 October 2025 10:17:42 +0000 (0:00:00.151) 0:01:08.904 ****** 2025-10-09 10:17:47.755236 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:17:47.755248 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-10-09 10:17:47.755261 | orchestrator | } 2025-10-09 10:17:47.755273 | orchestrator | 2025-10-09 10:17:47.755285 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-10-09 10:17:47.755297 | orchestrator | Thursday 09 October 2025 10:17:42 +0000 (0:00:00.139) 0:01:09.044 ****** 2025-10-09 10:17:47.755342 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:47.755362 | orchestrator | 2025-10-09 10:17:47.755380 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-10-09 10:17:47.755398 | orchestrator | Thursday 09 October 2025 10:17:43 +0000 (0:00:00.507) 0:01:09.551 ****** 2025-10-09 10:17:47.755410 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:47.755422 | orchestrator | 2025-10-09 10:17:47.755434 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-10-09 10:17:47.755446 | orchestrator | Thursday 09 October 2025 10:17:43 +0000 (0:00:00.519) 0:01:10.071 ****** 2025-10-09 10:17:47.755458 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:47.755471 | orchestrator | 2025-10-09 10:17:47.755482 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-10-09 10:17:47.755495 | orchestrator | Thursday 09 October 2025 10:17:44 +0000 (0:00:00.757) 0:01:10.828 ****** 2025-10-09 10:17:47.755506 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:47.755517 | orchestrator | 2025-10-09 10:17:47.755528 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-10-09 10:17:47.755539 | orchestrator | Thursday 09 October 2025 10:17:44 +0000 (0:00:00.158) 0:01:10.986 ****** 2025-10-09 10:17:47.755549 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755560 | orchestrator | 2025-10-09 10:17:47.755570 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-10-09 10:17:47.755581 | orchestrator | Thursday 09 October 2025 10:17:44 +0000 (0:00:00.121) 0:01:11.107 ****** 2025-10-09 10:17:47.755591 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755614 | orchestrator | 2025-10-09 10:17:47.755625 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-10-09 10:17:47.755636 | orchestrator | Thursday 09 October 2025 10:17:44 +0000 (0:00:00.120) 0:01:11.228 ****** 2025-10-09 10:17:47.755646 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:17:47.755657 | orchestrator |  "vgs_report": { 2025-10-09 10:17:47.755668 | orchestrator |  "vg": [] 2025-10-09 10:17:47.755696 | orchestrator |  } 2025-10-09 10:17:47.755707 | orchestrator | } 2025-10-09 10:17:47.755718 | orchestrator | 2025-10-09 10:17:47.755729 | orchestrator | TASK [Print LVM VG sizes] 
****************************************************** 2025-10-09 10:17:47.755740 | orchestrator | Thursday 09 October 2025 10:17:44 +0000 (0:00:00.144) 0:01:11.372 ****** 2025-10-09 10:17:47.755750 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755761 | orchestrator | 2025-10-09 10:17:47.755772 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-10-09 10:17:47.755783 | orchestrator | Thursday 09 October 2025 10:17:45 +0000 (0:00:00.151) 0:01:11.524 ****** 2025-10-09 10:17:47.755794 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755804 | orchestrator | 2025-10-09 10:17:47.755815 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-10-09 10:17:47.755826 | orchestrator | Thursday 09 October 2025 10:17:45 +0000 (0:00:00.145) 0:01:11.669 ****** 2025-10-09 10:17:47.755836 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755847 | orchestrator | 2025-10-09 10:17:47.755857 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-10-09 10:17:47.755868 | orchestrator | Thursday 09 October 2025 10:17:45 +0000 (0:00:00.139) 0:01:11.809 ****** 2025-10-09 10:17:47.755879 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755889 | orchestrator | 2025-10-09 10:17:47.755900 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-10-09 10:17:47.755911 | orchestrator | Thursday 09 October 2025 10:17:45 +0000 (0:00:00.143) 0:01:11.952 ****** 2025-10-09 10:17:47.755922 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755932 | orchestrator | 2025-10-09 10:17:47.755943 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-10-09 10:17:47.755954 | orchestrator | Thursday 09 October 2025 10:17:45 +0000 (0:00:00.160) 0:01:12.113 ****** 2025-10-09 10:17:47.755964 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.755975 | orchestrator | 2025-10-09 10:17:47.755986 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-10-09 10:17:47.755996 | orchestrator | Thursday 09 October 2025 10:17:45 +0000 (0:00:00.145) 0:01:12.258 ****** 2025-10-09 10:17:47.756007 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756018 | orchestrator | 2025-10-09 10:17:47.756028 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-10-09 10:17:47.756039 | orchestrator | Thursday 09 October 2025 10:17:45 +0000 (0:00:00.149) 0:01:12.408 ****** 2025-10-09 10:17:47.756049 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756060 | orchestrator | 2025-10-09 10:17:47.756071 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-10-09 10:17:47.756082 | orchestrator | Thursday 09 October 2025 10:17:46 +0000 (0:00:00.378) 0:01:12.786 ****** 2025-10-09 10:17:47.756092 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756103 | orchestrator | 2025-10-09 10:17:47.756114 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-10-09 10:17:47.756125 | orchestrator | Thursday 09 October 2025 10:17:46 +0000 (0:00:00.160) 0:01:12.947 ****** 2025-10-09 10:17:47.756141 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756152 | orchestrator | 2025-10-09 10:17:47.756163 | orchestrator | TASK 
[Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-10-09 10:17:47.756174 | orchestrator | Thursday 09 October 2025 10:17:46 +0000 (0:00:00.151) 0:01:13.098 ****** 2025-10-09 10:17:47.756185 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756195 | orchestrator | 2025-10-09 10:17:47.756206 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-10-09 10:17:47.756223 | orchestrator | Thursday 09 October 2025 10:17:46 +0000 (0:00:00.166) 0:01:13.265 ****** 2025-10-09 10:17:47.756234 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756245 | orchestrator | 2025-10-09 10:17:47.756255 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-10-09 10:17:47.756266 | orchestrator | Thursday 09 October 2025 10:17:46 +0000 (0:00:00.156) 0:01:13.421 ****** 2025-10-09 10:17:47.756277 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756288 | orchestrator | 2025-10-09 10:17:47.756299 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-10-09 10:17:47.756344 | orchestrator | Thursday 09 October 2025 10:17:47 +0000 (0:00:00.151) 0:01:13.572 ****** 2025-10-09 10:17:47.756355 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756366 | orchestrator | 2025-10-09 10:17:47.756377 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-10-09 10:17:47.756388 | orchestrator | Thursday 09 October 2025 10:17:47 +0000 (0:00:00.150) 0:01:13.723 ****** 2025-10-09 10:17:47.756398 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:47.756410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:47.756421 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756431 | orchestrator | 2025-10-09 10:17:47.756442 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-10-09 10:17:47.756453 | orchestrator | Thursday 09 October 2025 10:17:47 +0000 (0:00:00.166) 0:01:13.889 ****** 2025-10-09 10:17:47.756464 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:47.756475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:47.756486 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:47.756497 | orchestrator | 2025-10-09 10:17:47.756507 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-10-09 10:17:47.756518 | orchestrator | Thursday 09 October 2025 10:17:47 +0000 (0:00:00.164) 0:01:14.054 ****** 2025-10-09 10:17:47.756537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:50.905121 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.905213 | 
orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.905228 | orchestrator | 2025-10-09 10:17:50.905240 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-10-09 10:17:50.905252 | orchestrator | Thursday 09 October 2025 10:17:47 +0000 (0:00:00.164) 0:01:14.219 ****** 2025-10-09 10:17:50.905264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:50.905275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.905286 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.905297 | orchestrator | 2025-10-09 10:17:50.905357 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-10-09 10:17:50.905368 | orchestrator | Thursday 09 October 2025 10:17:47 +0000 (0:00:00.168) 0:01:14.387 ****** 2025-10-09 10:17:50.905380 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:50.905414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.905425 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.905436 | orchestrator | 2025-10-09 10:17:50.905447 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-10-09 10:17:50.905458 | orchestrator | Thursday 09 October 2025 10:17:48 +0000 (0:00:00.176) 0:01:14.564 ****** 2025-10-09 10:17:50.905469 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:50.905480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.905491 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.905501 | orchestrator | 2025-10-09 10:17:50.905512 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-10-09 10:17:50.905523 | orchestrator | Thursday 09 October 2025 10:17:48 +0000 (0:00:00.401) 0:01:14.966 ****** 2025-10-09 10:17:50.905534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:50.905545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.905556 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.905567 | orchestrator | 2025-10-09 10:17:50.905577 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-10-09 10:17:50.905588 | orchestrator | Thursday 09 October 2025 10:17:48 +0000 (0:00:00.180) 0:01:15.146 ****** 2025-10-09 10:17:50.905599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 
10:17:50.905611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.905621 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.905632 | orchestrator | 2025-10-09 10:17:50.905643 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-10-09 10:17:50.905655 | orchestrator | Thursday 09 October 2025 10:17:48 +0000 (0:00:00.149) 0:01:15.296 ****** 2025-10-09 10:17:50.905667 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:50.905680 | orchestrator | 2025-10-09 10:17:50.905693 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-10-09 10:17:50.905705 | orchestrator | Thursday 09 October 2025 10:17:49 +0000 (0:00:00.499) 0:01:15.795 ****** 2025-10-09 10:17:50.905716 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:50.905728 | orchestrator | 2025-10-09 10:17:50.905741 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-10-09 10:17:50.905753 | orchestrator | Thursday 09 October 2025 10:17:49 +0000 (0:00:00.515) 0:01:16.311 ****** 2025-10-09 10:17:50.905764 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:50.905774 | orchestrator | 2025-10-09 10:17:50.905785 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-10-09 10:17:50.905796 | orchestrator | Thursday 09 October 2025 10:17:50 +0000 (0:00:00.161) 0:01:16.473 ****** 2025-10-09 10:17:50.905807 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'vg_name': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'}) 2025-10-09 10:17:50.905819 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'vg_name': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'}) 2025-10-09 10:17:50.905830 | orchestrator | 2025-10-09 10:17:50.905841 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-10-09 10:17:50.905859 | orchestrator | Thursday 09 October 2025 10:17:50 +0000 (0:00:00.171) 0:01:16.644 ****** 2025-10-09 10:17:50.905887 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:50.905899 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.905909 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.905920 | orchestrator | 2025-10-09 10:17:50.905931 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-10-09 10:17:50.905942 | orchestrator | Thursday 09 October 2025 10:17:50 +0000 (0:00:00.179) 0:01:16.824 ****** 2025-10-09 10:17:50.905952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:50.905963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.905974 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.905985 | orchestrator | 2025-10-09 10:17:50.905996 
| orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-10-09 10:17:50.906006 | orchestrator | Thursday 09 October 2025 10:17:50 +0000 (0:00:00.190) 0:01:17.015 ****** 2025-10-09 10:17:50.906110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'})  2025-10-09 10:17:50.906145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'})  2025-10-09 10:17:50.906157 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:50.906168 | orchestrator | 2025-10-09 10:17:50.906179 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-10-09 10:17:50.906190 | orchestrator | Thursday 09 October 2025 10:17:50 +0000 (0:00:00.164) 0:01:17.179 ****** 2025-10-09 10:17:50.906201 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:17:50.906211 | orchestrator |  "lvm_report": { 2025-10-09 10:17:50.906222 | orchestrator |  "lv": [ 2025-10-09 10:17:50.906233 | orchestrator |  { 2025-10-09 10:17:50.906244 | orchestrator |  "lv_name": "osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108", 2025-10-09 10:17:50.906255 | orchestrator |  "vg_name": "ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108" 2025-10-09 10:17:50.906266 | orchestrator |  }, 2025-10-09 10:17:50.906282 | orchestrator |  { 2025-10-09 10:17:50.906293 | orchestrator |  "lv_name": "osd-block-858e0050-0995-591a-8058-d3b9e3e91008", 2025-10-09 10:17:50.906335 | orchestrator |  "vg_name": "ceph-858e0050-0995-591a-8058-d3b9e3e91008" 2025-10-09 10:17:50.906347 | orchestrator |  } 2025-10-09 10:17:50.906358 | orchestrator |  ], 2025-10-09 10:17:50.906369 | orchestrator |  "pv": [ 2025-10-09 10:17:50.906379 | orchestrator |  { 2025-10-09 10:17:50.906390 | orchestrator |  "pv_name": "/dev/sdb", 2025-10-09 10:17:50.906401 | orchestrator |  "vg_name": "ceph-858e0050-0995-591a-8058-d3b9e3e91008" 2025-10-09 10:17:50.906411 | orchestrator |  }, 2025-10-09 10:17:50.906422 | orchestrator |  { 2025-10-09 10:17:50.906433 | orchestrator |  "pv_name": "/dev/sdc", 2025-10-09 10:17:50.906443 | orchestrator |  "vg_name": "ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108" 2025-10-09 10:17:50.906454 | orchestrator |  } 2025-10-09 10:17:50.906465 | orchestrator |  ] 2025-10-09 10:17:50.906475 | orchestrator |  } 2025-10-09 10:17:50.906486 | orchestrator | } 2025-10-09 10:17:50.906497 | orchestrator | 2025-10-09 10:17:50.906508 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:17:50.906519 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-10-09 10:17:50.906538 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-10-09 10:17:50.906549 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-10-09 10:17:50.906560 | orchestrator | 2025-10-09 10:17:50.906571 | orchestrator | 2025-10-09 10:17:50.906581 | orchestrator | 2025-10-09 10:17:50.906592 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:17:50.906603 | orchestrator | Thursday 09 October 2025 10:17:50 +0000 (0:00:00.156) 0:01:17.336 ****** 2025-10-09 10:17:50.906614 | orchestrator | 
=============================================================================== 2025-10-09 10:17:50.906624 | orchestrator | Create block VGs -------------------------------------------------------- 5.72s 2025-10-09 10:17:50.906635 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s 2025-10-09 10:17:50.906646 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s 2025-10-09 10:17:50.906657 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.77s 2025-10-09 10:17:50.906667 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2025-10-09 10:17:50.906678 | orchestrator | Add known partitions to the list of available block devices ------------- 1.58s 2025-10-09 10:17:50.906689 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2025-10-09 10:17:50.906699 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.51s 2025-10-09 10:17:50.906718 | orchestrator | Add known links to the list of available block devices ------------------ 1.44s 2025-10-09 10:17:51.336803 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2025-10-09 10:17:51.336873 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2025-10-09 10:17:51.336886 | orchestrator | Print LVM report data --------------------------------------------------- 1.00s 2025-10-09 10:17:51.336897 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s 2025-10-09 10:17:51.336907 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s 2025-10-09 10:17:51.336918 | orchestrator | Get initial list of available block devices ----------------------------- 0.85s 2025-10-09 10:17:51.336929 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.76s 2025-10-09 10:17:51.336940 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2025-10-09 10:17:51.336950 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-10-09 10:17:51.336961 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.73s 2025-10-09 10:17:51.336972 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-10-09 10:18:03.837231 | orchestrator | 2025-10-09 10:18:03 | INFO  | Task 4a10623e-080a-4d64-888b-b23a34a53d46 (facts) was prepared for execution. 2025-10-09 10:18:03.837394 | orchestrator | 2025-10-09 10:18:03 | INFO  | It takes a moment until task 4a10623e-080a-4d64-888b-b23a34a53d46 (facts) has been started and output is visible here. 
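
The ceph-configure-lvm play above turns each entry of ceph_osd_devices into one LVM volume group named ceph-<osd_lvm_uuid> backed by the raw disk (here /dev/sdb and /dev/sdc) plus one logical volume named osd-block-<osd_lvm_uuid> inside it; those VG/LV pairs are exactly what the "Print LVM report data" task lists at the end of the play and what ceph-ansible later consumes via lvm_volumes. The following is a minimal, illustrative sketch of the "Create block VGs" / "Create block LVs" steps using the community.general.lvg and community.general.lvol modules; it is an assumption of how such tasks could look, not the actual OSISM task file, and the device paths and variable names are examples only.

# Illustrative sketch (assumed, not the OSISM implementation):
# one "ceph-<uuid>" VG and one "osd-block-<uuid>" LV per ceph_osd_devices entry.
- name: Create block VGs
  community.general.lvg:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    pvs: "/dev/{{ item.key }}"          # e.g. /dev/sdb, /dev/sdc as in the log above
  loop: "{{ ceph_osd_devices | dict2items }}"

- name: Create block LVs
  community.general.lvol:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
    size: 100%FREE                      # dedicate the whole VG to the OSD block device
  loop: "{{ ceph_osd_devices | dict2items }}"
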
2025-10-09 10:18:16.532171 | orchestrator | 2025-10-09 10:18:16.532269 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-10-09 10:18:16.532281 | orchestrator | 2025-10-09 10:18:16.532289 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-10-09 10:18:16.532297 | orchestrator | Thursday 09 October 2025 10:18:08 +0000 (0:00:00.313) 0:00:00.313 ****** 2025-10-09 10:18:16.532340 | orchestrator | ok: [testbed-manager] 2025-10-09 10:18:16.532349 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:18:16.532356 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:18:16.532382 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:18:16.532389 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:16.532395 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:18:16.532402 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:18:16.532409 | orchestrator | 2025-10-09 10:18:16.532416 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-10-09 10:18:16.532422 | orchestrator | Thursday 09 October 2025 10:18:09 +0000 (0:00:01.132) 0:00:01.446 ****** 2025-10-09 10:18:16.532429 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:18:16.532449 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:18:16.532456 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:18:16.532463 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:18:16.532470 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:16.532476 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:18:16.532483 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:18:16.532490 | orchestrator | 2025-10-09 10:18:16.532496 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:18:16.532503 | orchestrator | 2025-10-09 10:18:16.532510 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 10:18:16.532516 | orchestrator | Thursday 09 October 2025 10:18:10 +0000 (0:00:01.341) 0:00:02.788 ****** 2025-10-09 10:18:16.532523 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:18:16.532529 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:18:16.532536 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:18:16.532543 | orchestrator | ok: [testbed-manager] 2025-10-09 10:18:16.532549 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:16.532556 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:18:16.532562 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:18:16.532569 | orchestrator | 2025-10-09 10:18:16.532576 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-10-09 10:18:16.532582 | orchestrator | 2025-10-09 10:18:16.532589 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-10-09 10:18:16.532596 | orchestrator | Thursday 09 October 2025 10:18:15 +0000 (0:00:04.956) 0:00:07.744 ****** 2025-10-09 10:18:16.532602 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:18:16.532609 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:18:16.532616 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:18:16.532622 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:18:16.532629 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:16.532635 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:18:16.532642 | orchestrator | skipping: 
[testbed-node-5] 2025-10-09 10:18:16.532649 | orchestrator | 2025-10-09 10:18:16.532655 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:18:16.532662 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:18:16.532671 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:18:16.532678 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:18:16.532685 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:18:16.532692 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:18:16.532698 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:18:16.532705 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:18:16.532711 | orchestrator | 2025-10-09 10:18:16.532718 | orchestrator | 2025-10-09 10:18:16.532732 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:18:16.532740 | orchestrator | Thursday 09 October 2025 10:18:16 +0000 (0:00:00.606) 0:00:08.351 ****** 2025-10-09 10:18:16.532748 | orchestrator | =============================================================================== 2025-10-09 10:18:16.532755 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.96s 2025-10-09 10:18:16.532763 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2025-10-09 10:18:16.532770 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-10-09 10:18:16.532778 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2025-10-09 10:18:29.019484 | orchestrator | 2025-10-09 10:18:29 | INFO  | Task 9cd3af71-5087-4a23-9f5f-ad81968bd985 (frr) was prepared for execution. 2025-10-09 10:18:29.019603 | orchestrator | 2025-10-09 10:18:29 | INFO  | It takes a moment until task 9cd3af71-5087-4a23-9f5f-ad81968bd985 (frr) has been started and output is visible here. 
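
The frr task queued above runs the osism.services.frr play shown next: it pins and installs the frr package, copies vtysh.conf, daemons and frr.conf, enables IP forwarding and multipath routing via sysctl, and then restarts the service. Below is a minimal sketch of that sysctl step, assuming the ansible.posix.sysctl module and a hypothetical frr_sysctl_parameters list; the parameter names and values themselves are taken from the "Set sysctl parameters" output that follows, but the task layout is illustrative only.

# Illustrative sketch (assumed, not the osism.services.frr role itself):
# apply the forwarding-related kernel parameters reported as changed below.
- name: Set sysctl parameters
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true
    state: present
  loop: "{{ frr_sysctl_parameters }}"
  vars:
    frr_sysctl_parameters:
      - { name: net.ipv4.ip_forward, value: 1 }
      - { name: net.ipv4.conf.all.send_redirects, value: 0 }
      - { name: net.ipv4.conf.all.accept_redirects, value: 0 }
      - { name: net.ipv4.fib_multipath_hash_policy, value: 1 }
      - { name: net.ipv4.conf.default.ignore_routes_with_linkdown, value: 1 }
      - { name: net.ipv4.conf.all.rp_filter, value: 2 }
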
2025-10-09 10:18:56.724246 | orchestrator | 2025-10-09 10:18:56.724378 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-10-09 10:18:56.724390 | orchestrator | 2025-10-09 10:18:56.724399 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-10-09 10:18:56.724408 | orchestrator | Thursday 09 October 2025 10:18:32 +0000 (0:00:00.261) 0:00:00.261 ****** 2025-10-09 10:18:56.724417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-10-09 10:18:56.724427 | orchestrator | 2025-10-09 10:18:56.724435 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-10-09 10:18:56.724442 | orchestrator | Thursday 09 October 2025 10:18:33 +0000 (0:00:00.270) 0:00:00.532 ****** 2025-10-09 10:18:56.724451 | orchestrator | changed: [testbed-manager] 2025-10-09 10:18:56.724460 | orchestrator | 2025-10-09 10:18:56.724468 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-10-09 10:18:56.724475 | orchestrator | Thursday 09 October 2025 10:18:34 +0000 (0:00:01.204) 0:00:01.736 ****** 2025-10-09 10:18:56.724483 | orchestrator | changed: [testbed-manager] 2025-10-09 10:18:56.724491 | orchestrator | 2025-10-09 10:18:56.724499 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-10-09 10:18:56.724521 | orchestrator | Thursday 09 October 2025 10:18:44 +0000 (0:00:10.572) 0:00:12.308 ****** 2025-10-09 10:18:56.724530 | orchestrator | ok: [testbed-manager] 2025-10-09 10:18:56.724539 | orchestrator | 2025-10-09 10:18:56.724547 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-10-09 10:18:56.724554 | orchestrator | Thursday 09 October 2025 10:18:46 +0000 (0:00:01.397) 0:00:13.705 ****** 2025-10-09 10:18:56.724562 | orchestrator | changed: [testbed-manager] 2025-10-09 10:18:56.724570 | orchestrator | 2025-10-09 10:18:56.724578 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-10-09 10:18:56.724585 | orchestrator | Thursday 09 October 2025 10:18:47 +0000 (0:00:01.095) 0:00:14.801 ****** 2025-10-09 10:18:56.724593 | orchestrator | ok: [testbed-manager] 2025-10-09 10:18:56.724601 | orchestrator | 2025-10-09 10:18:56.724609 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-10-09 10:18:56.724618 | orchestrator | Thursday 09 October 2025 10:18:48 +0000 (0:00:01.351) 0:00:16.153 ****** 2025-10-09 10:18:56.724626 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:18:56.724634 | orchestrator | 2025-10-09 10:18:56.724641 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-10-09 10:18:56.724649 | orchestrator | Thursday 09 October 2025 10:18:49 +0000 (0:00:00.875) 0:00:17.028 ****** 2025-10-09 10:18:56.724657 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:18:56.724665 | orchestrator | 2025-10-09 10:18:56.724673 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-10-09 10:18:56.724701 | orchestrator | Thursday 09 October 2025 10:18:49 +0000 (0:00:00.184) 0:00:17.212 ****** 2025-10-09 10:18:56.724709 | orchestrator | changed: [testbed-manager] 2025-10-09 10:18:56.724717 | orchestrator 
| 2025-10-09 10:18:56.724725 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-10-09 10:18:56.724733 | orchestrator | Thursday 09 October 2025 10:18:50 +0000 (0:00:01.040) 0:00:18.252 ****** 2025-10-09 10:18:56.724740 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-10-09 10:18:56.724748 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-10-09 10:18:56.724756 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-10-09 10:18:56.724764 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-10-09 10:18:56.724772 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-10-09 10:18:56.724780 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-10-09 10:18:56.724788 | orchestrator | 2025-10-09 10:18:56.724795 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-10-09 10:18:56.724804 | orchestrator | Thursday 09 October 2025 10:18:53 +0000 (0:00:02.380) 0:00:20.633 ****** 2025-10-09 10:18:56.724813 | orchestrator | ok: [testbed-manager] 2025-10-09 10:18:56.724822 | orchestrator | 2025-10-09 10:18:56.724830 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-10-09 10:18:56.724839 | orchestrator | Thursday 09 October 2025 10:18:54 +0000 (0:00:01.748) 0:00:22.382 ****** 2025-10-09 10:18:56.724848 | orchestrator | changed: [testbed-manager] 2025-10-09 10:18:56.724856 | orchestrator | 2025-10-09 10:18:56.724865 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:18:56.724875 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:18:56.724884 | orchestrator | 2025-10-09 10:18:56.724892 | orchestrator | 2025-10-09 10:18:56.724902 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:18:56.724910 | orchestrator | Thursday 09 October 2025 10:18:56 +0000 (0:00:01.452) 0:00:23.834 ****** 2025-10-09 10:18:56.724919 | orchestrator | =============================================================================== 2025-10-09 10:18:56.724927 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.57s 2025-10-09 10:18:56.724936 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.38s 2025-10-09 10:18:56.724945 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.75s 2025-10-09 10:18:56.724954 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.45s 2025-10-09 10:18:56.724977 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.40s 2025-10-09 10:18:56.724986 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.35s 2025-10-09 10:18:56.724995 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.20s 2025-10-09 10:18:56.725004 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.10s 2025-10-09 
10:18:56.725012 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.04s 2025-10-09 10:18:56.725022 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.88s 2025-10-09 10:18:56.725031 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.27s 2025-10-09 10:18:56.725039 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.18s 2025-10-09 10:18:57.054432 | orchestrator | 2025-10-09 10:18:57.056884 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Oct 9 10:18:57 UTC 2025 2025-10-09 10:18:57.056902 | orchestrator | 2025-10-09 10:18:59.135274 | orchestrator | 2025-10-09 10:18:59 | INFO  | Collection nutshell is prepared for execution 2025-10-09 10:18:59.135433 | orchestrator | 2025-10-09 10:18:59 | INFO  | D [0] - dotfiles 2025-10-09 10:19:09.168074 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [0] - homer 2025-10-09 10:19:09.168182 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [0] - netdata 2025-10-09 10:19:09.168199 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [0] - openstackclient 2025-10-09 10:19:09.168223 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [0] - phpmyadmin 2025-10-09 10:19:09.168638 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [0] - common 2025-10-09 10:19:09.173449 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [1] -- loadbalancer 2025-10-09 10:19:09.173881 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [2] --- opensearch 2025-10-09 10:19:09.175227 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [2] --- mariadb-ng 2025-10-09 10:19:09.175254 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [3] ---- horizon 2025-10-09 10:19:09.175267 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [3] ---- keystone 2025-10-09 10:19:09.175279 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [4] ----- neutron 2025-10-09 10:19:09.175577 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [5] ------ wait-for-nova 2025-10-09 10:19:09.175720 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [5] ------ octavia 2025-10-09 10:19:09.177376 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [4] ----- barbican 2025-10-09 10:19:09.177631 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [4] ----- designate 2025-10-09 10:19:09.177740 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [4] ----- ironic 2025-10-09 10:19:09.178492 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [4] ----- placement 2025-10-09 10:19:09.178714 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [4] ----- magnum 2025-10-09 10:19:09.179597 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [1] -- openvswitch 2025-10-09 10:19:09.179618 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [2] --- ovn 2025-10-09 10:19:09.179921 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [1] -- memcached 2025-10-09 10:19:09.180056 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [1] -- redis 2025-10-09 10:19:09.180487 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [1] -- rabbitmq-ng 2025-10-09 10:19:09.180950 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [0] - kubernetes 2025-10-09 10:19:09.184094 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [1] -- kubeconfig 2025-10-09 10:19:09.184128 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [1] -- copy-kubeconfig 2025-10-09 10:19:09.184140 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [0] - ceph 2025-10-09 10:19:09.186656 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [1] -- ceph-pools 2025-10-09 
10:19:09.186801 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [2] --- copy-ceph-keys
2025-10-09 10:19:09.186820 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [3] ---- cephclient
2025-10-09 10:19:09.186832 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-10-09 10:19:09.187119 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [4] ----- wait-for-keystone
2025-10-09 10:19:09.187140 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [5] ------ kolla-ceph-rgw
2025-10-09 10:19:09.187324 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [5] ------ glance
2025-10-09 10:19:09.187345 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [5] ------ cinder
2025-10-09 10:19:09.187578 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [5] ------ nova
2025-10-09 10:19:09.187859 | orchestrator | 2025-10-09 10:19:09 | INFO  | A [4] ----- prometheus
2025-10-09 10:19:09.188132 | orchestrator | 2025-10-09 10:19:09 | INFO  | D [5] ------ grafana
2025-10-09 10:19:09.406920 | orchestrator | 2025-10-09 10:19:09 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-10-09 10:19:09.406974 | orchestrator | 2025-10-09 10:19:09 | INFO  | Tasks are running in the background
2025-10-09 10:19:12.600496 | orchestrator | 2025-10-09 10:19:12 | INFO  | No task IDs specified, wait for all currently running tasks
2025-10-09 10:19:14.751486 | orchestrator | 2025-10-09 10:19:14 | INFO  | Task ae0a3e6f-ead2-47c0-a9f3-6bb43d7e11f9 is in state STARTED
2025-10-09 10:19:14.751586 | orchestrator | 2025-10-09 10:19:14 | INFO  | Task aa697255-068c-419a-bb5d-da20f9eb0d12 is in state STARTED
2025-10-09 10:19:14.751601 | orchestrator | 2025-10-09 10:19:14 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:19:14.752190 | orchestrator | 2025-10-09 10:19:14 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED
2025-10-09 10:19:14.753138 | orchestrator | 2025-10-09 10:19:14 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED
2025-10-09 10:19:14.754073 | orchestrator | 2025-10-09 10:19:14 | INFO  | Task 5673763f-2617-4d74-8587-a2e3e7a5dc44 is in state STARTED
2025-10-09 10:19:14.754952 | orchestrator | 2025-10-09 10:19:14 | INFO  | Task 0a323243-344d-48fd-8805-d916a598c3d2 is in state STARTED
2025-10-09 10:19:14.754994 | orchestrator | 2025-10-09 10:19:14 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:19:42.791259 | orchestrator | 2025-10-09 10:19:42 | INFO  | Task ae0a3e6f-ead2-47c0-a9f3-6bb43d7e11f9 is in state STARTED
2025-10-09 10:19:42.791837 | orchestrator | 2025-10-09 10:19:42 | INFO  | Task aa697255-068c-419a-bb5d-da20f9eb0d12 is in state STARTED
2025-10-09 10:19:42.792749 | orchestrator | 2025-10-09 10:19:42 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:19:42.793589 | orchestrator | 2025-10-09 10:19:42 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED
2025-10-09 10:19:42.794472 | orchestrator | 2025-10-09 10:19:42 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED
2025-10-09 10:19:42.796221 | orchestrator | 2025-10-09 10:19:42 | INFO  | Task 5673763f-2617-4d74-8587-a2e3e7a5dc44 is in state STARTED
2025-10-09 10:19:42.796374 | orchestrator | 2025-10-09 10:19:42 | INFO  | Task 0a323243-344d-48fd-8805-d916a598c3d2 is in state STARTED
2025-10-09 10:19:42.796394 | orchestrator | 2025-10-09 10:19:42 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:19:46.035080 | orchestrator | 2025-10-09 10:19:46 | INFO  | Task b47c7c25-6d1d-4018-b12d-397b0501cbb6 is in state STARTED
2025-10-09 10:19:46.041459 | orchestrator | 2025-10-09 10:19:46 | INFO  | Task ae0a3e6f-ead2-47c0-a9f3-6bb43d7e11f9 is in state STARTED
2025-10-09 10:19:46.045178 | orchestrator |
2025-10-09 10:19:46.045210 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-10-09 10:19:46.045221 | orchestrator |
2025-10-09 10:19:46.045232 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-10-09 10:19:46.045242 | orchestrator | Thursday 09 October 2025 10:19:26 +0000 (0:00:01.760) 0:00:01.760 ******
2025-10-09 10:19:46.045252 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:19:46.045263 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:19:46.045273 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:19:46.045311 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:19:46.045321 | orchestrator | changed: [testbed-manager]
2025-10-09 10:19:46.045331 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:19:46.045341 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:19:46.045351 | orchestrator |
2025-10-09 10:19:46.045361 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
******** 2025-10-09 10:19:46.045371 | orchestrator | Thursday 09 October 2025 10:19:30 +0000 (0:00:04.430) 0:00:06.190 ****** 2025-10-09 10:19:46.045382 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-10-09 10:19:46.045392 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-10-09 10:19:46.045402 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-10-09 10:19:46.045411 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-10-09 10:19:46.045421 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-10-09 10:19:46.045431 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-10-09 10:19:46.045441 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-10-09 10:19:46.045451 | orchestrator | 2025-10-09 10:19:46.045460 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-10-09 10:19:46.045470 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:04.205) 0:00:10.397 ****** 2025-10-09 10:19:46.045484 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:19:31.658984', 'end': '2025-10-09 10:19:31.664997', 'delta': '0:00:00.006013', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:19:46.045506 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:19:31.621523', 'end': '2025-10-09 10:19:31.628989', 'delta': '0:00:00.007466', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:19:46.045522 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:19:31.927995', 'end': '2025-10-09 10:19:31.934982', 'delta': '0:00:00.006987', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:19:46.045576 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:19:32.356771', 'end': '2025-10-09 10:19:32.361359', 'delta': '0:00:00.004588', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:19:46.045588 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:19:33.046891', 'end': '2025-10-09 10:19:33.054417', 'delta': '0:00:00.007526', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:19:46.045599 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:19:33.676697', 'end': '2025-10-09 10:19:33.685107', 'delta': '0:00:00.008410', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:19:46.045609 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:19:34.279851', 'end': '2025-10-09 10:19:34.288398', 'delta': '0:00:00.008547', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:19:46.045627 | orchestrator | 2025-10-09 10:19:46.045637 
| orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-10-09 10:19:46.045647 | orchestrator | Thursday 09 October 2025 10:19:38 +0000 (0:00:03.733) 0:00:14.131 ****** 2025-10-09 10:19:46.045657 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-10-09 10:19:46.045666 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-10-09 10:19:46.045676 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-10-09 10:19:46.045686 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-10-09 10:19:46.045695 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-10-09 10:19:46.045705 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-10-09 10:19:46.045714 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-10-09 10:19:46.045724 | orchestrator | 2025-10-09 10:19:46.045738 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-10-09 10:19:46.045748 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:01.555) 0:00:15.686 ****** 2025-10-09 10:19:46.045758 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-10-09 10:19:46.045768 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-10-09 10:19:46.045777 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-10-09 10:19:46.045787 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-10-09 10:19:46.045798 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-10-09 10:19:46.045811 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-10-09 10:19:46.045822 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-10-09 10:19:46.045833 | orchestrator | 2025-10-09 10:19:46.045844 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:19:46.045863 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:19:46.045878 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:19:46.045889 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:19:46.045901 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:19:46.045912 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:19:46.045923 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:19:46.045934 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:19:46.045945 | orchestrator | 2025-10-09 10:19:46.045956 | orchestrator | 2025-10-09 10:19:46.045967 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:19:46.045978 | orchestrator | Thursday 09 October 2025 10:19:43 +0000 (0:00:03.679) 0:00:19.366 ****** 2025-10-09 10:19:46.045988 | orchestrator | =============================================================================== 2025-10-09 10:19:46.045999 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.43s 2025-10-09 10:19:46.046010 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 4.21s
2025-10-09 10:19:46.046050 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.73s
2025-10-09 10:19:46.046061 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.68s
2025-10-09 10:19:46.046072 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.56s
2025-10-09 10:19:46.046084 | orchestrator | 2025-10-09 10:19:46 | INFO  | Task aa697255-068c-419a-bb5d-da20f9eb0d12 is in state SUCCESS
2025-10-09 10:19:46.055403 | orchestrator | 2025-10-09 10:19:46 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:19:46.055425 | orchestrator | 2025-10-09 10:19:46 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED
2025-10-09 10:19:46.057539 | orchestrator | 2025-10-09 10:19:46 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED
2025-10-09 10:19:46.062839 | orchestrator | 2025-10-09 10:19:46 | INFO  | Task 5673763f-2617-4d74-8587-a2e3e7a5dc44 is in state STARTED
2025-10-09 10:19:46.064831 | orchestrator | 2025-10-09 10:19:46 | INFO  | Task 0a323243-344d-48fd-8805-d916a598c3d2 is in state STARTED
2025-10-09 10:19:46.064849 | orchestrator | 2025-10-09 10:19:46 | INFO  | Wait 1 second(s) until the next check
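The geerlingguy.dotfiles play recapped above clones a dotfiles repository once per host and then symlinks the configured files (here only .tmux.conf) into the login user's home directory, removing any plain file that a link would replace. A minimal sketch of such a play follows; the repository URL, checkout path and file list are illustrative assumptions, not the values used by this job.

    - name: Apply role geerlingguy.dotfiles
      hosts: all
      roles:
        - role: geerlingguy.dotfiles
          vars:
            dotfiles_repo: https://github.com/example/dotfiles.git   # placeholder repository
            dotfiles_repo_local_destination: ~/dotfiles               # clone target on each host
            dotfiles_files:                                           # files to link into $HOME
              - .tmux.conf

Because the role only touches what is listed in dotfiles_files, the play stays idempotent and adding further files later is a one-line change.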
2025-10-09 10:19:55.378777 | orchestrator | 2025-10-09 10:19:55 | INFO  | Task b47c7c25-6d1d-4018-b12d-397b0501cbb6 is in state STARTED
2025-10-09 10:19:55.381909 | orchestrator | 2025-10-09 10:19:55 | INFO  | Task ae0a3e6f-ead2-47c0-a9f3-6bb43d7e11f9 is in state STARTED
2025-10-09 10:19:55.382637 | orchestrator | 2025-10-09 10:19:55 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:19:55.384148 | orchestrator | 2025-10-09 10:19:55 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED
2025-10-09 10:19:55.387579 | orchestrator | 2025-10-09 10:19:55 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED
2025-10-09 10:19:55.388521 | orchestrator | 2025-10-09 10:19:55 | INFO  | Task 5673763f-2617-4d74-8587-a2e3e7a5dc44 is in state STARTED
2025-10-09 10:19:55.389346 | orchestrator | 2025-10-09 10:19:55 | INFO  | Task 0a323243-344d-48fd-8805-d916a598c3d2 is in state STARTED
2025-10-09 10:19:55.391968 | orchestrator | 2025-10-09 10:19:55 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:20:11.276321 | orchestrator | 2025-10-09 10:20:11 | INFO  | Task b47c7c25-6d1d-4018-b12d-397b0501cbb6 is in state STARTED
2025-10-09 10:20:11.278778 | orchestrator | 2025-10-09 10:20:11 | INFO  | Task ae0a3e6f-ead2-47c0-a9f3-6bb43d7e11f9 is in state STARTED
2025-10-09 10:20:11.279605 | orchestrator | 2025-10-09 10:20:11 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:20:11.282257 | orchestrator | 2025-10-09 10:20:11 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED
2025-10-09 10:20:11.285050 | orchestrator | 2025-10-09 10:20:11 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED
2025-10-09 10:20:11.285876 | orchestrator | 2025-10-09 10:20:11 | INFO  | Task 5673763f-2617-4d74-8587-a2e3e7a5dc44 is in state STARTED
2025-10-09 10:20:11.286878 | orchestrator | 2025-10-09 10:20:11 | INFO  | Task 0a323243-344d-48fd-8805-d916a598c3d2 is in state SUCCESS
2025-10-09 10:20:11.289575 | orchestrator | 2025-10-09 10:20:11 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:20:33.119104 | orchestrator | 2025-10-09 10:20:33 | INFO  | Task b47c7c25-6d1d-4018-b12d-397b0501cbb6 is in state STARTED
2025-10-09 10:20:33.119360 | orchestrator | 2025-10-09 10:20:33 | INFO  | Task ae0a3e6f-ead2-47c0-a9f3-6bb43d7e11f9 is in state SUCCESS
2025-10-09 10:20:33.120962 | orchestrator | 2025-10-09 10:20:33 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:20:33.121726 | orchestrator | 2025-10-09 10:20:33 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED
2025-10-09 10:20:33.123172 | orchestrator | 2025-10-09 10:20:33 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED
2025-10-09 10:20:33.125024 | orchestrator | 2025-10-09 10:20:33 | INFO  | Task 5673763f-2617-4d74-8587-a2e3e7a5dc44 is in state STARTED
2025-10-09 10:20:33.125047 | orchestrator | 2025-10-09 10:20:33 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:20:54.681749 | orchestrator | 2025-10-09 10:20:54 | INFO  | Task b47c7c25-6d1d-4018-b12d-397b0501cbb6 is in state STARTED
2025-10-09 10:20:54.681829 | orchestrator | 2025-10-09
10:20:54 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:20:54.685175 | orchestrator | 2025-10-09 10:20:54 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:20:54.685260 | orchestrator | 2025-10-09 10:20:54 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:20:54.687238 | orchestrator | 2025-10-09 10:20:54 | INFO  | Task 5673763f-2617-4d74-8587-a2e3e7a5dc44 is in state SUCCESS 2025-10-09 10:20:54.687267 | orchestrator | 2025-10-09 10:20:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:20:54.689063 | orchestrator | 2025-10-09 10:20:54.689100 | orchestrator | 2025-10-09 10:20:54.689112 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-10-09 10:20:54.689125 | orchestrator | 2025-10-09 10:20:54.689137 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-10-09 10:20:54.689150 | orchestrator | Thursday 09 October 2025 10:19:24 +0000 (0:00:01.144) 0:00:01.144 ****** 2025-10-09 10:20:54.689162 | orchestrator | ok: [testbed-manager] => { 2025-10-09 10:20:54.689176 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-10-09 10:20:54.689190 | orchestrator | } 2025-10-09 10:20:54.689201 | orchestrator | 2025-10-09 10:20:54.689219 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-10-09 10:20:54.689231 | orchestrator | Thursday 09 October 2025 10:19:25 +0000 (0:00:01.069) 0:00:02.213 ****** 2025-10-09 10:20:54.689242 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.689299 | orchestrator | 2025-10-09 10:20:54.689311 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-10-09 10:20:54.689323 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:02.196) 0:00:04.409 ****** 2025-10-09 10:20:54.689334 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-10-09 10:20:54.689345 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-10-09 10:20:54.689357 | orchestrator | 2025-10-09 10:20:54.689368 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-10-09 10:20:54.689380 | orchestrator | Thursday 09 October 2025 10:19:29 +0000 (0:00:01.471) 0:00:05.881 ****** 2025-10-09 10:20:54.689392 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.689403 | orchestrator | 2025-10-09 10:20:54.689414 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-10-09 10:20:54.689426 | orchestrator | Thursday 09 October 2025 10:19:33 +0000 (0:00:04.526) 0:00:10.407 ****** 2025-10-09 10:20:54.689437 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.689448 | orchestrator | 2025-10-09 10:20:54.689459 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-10-09 10:20:54.689470 | orchestrator | Thursday 09 October 2025 10:19:36 +0000 (0:00:02.622) 0:00:13.030 ****** 2025-10-09 10:20:54.689481 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
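The homer play above renders a config.yml and a docker-compose.yml under /opt/homer and then brings the dashboard up; the "Manage homer service" task keeps retrying until the service is up, which is the ok: [testbed-manager] that follows. A rough sketch of what such a compose file can look like is shown below; the image tag, the container-side path and the omission of any proxy labels are assumptions for illustration, not the file the role actually renders.

    # /opt/homer/docker-compose.yml (illustrative sketch only)
    services:
      homer:
        image: b4bz/homer:latest                      # upstream homer image; tag assumed
        restart: unless-stopped
        volumes:
          - /opt/homer/configuration:/www/assets:ro   # config.yml rendered by the role (container path assumed)
        networks:
          - traefik                                   # join the reverse-proxy network
    networks:
      traefik:
        external: true                                # matches the "Create traefik external network" task above

Declaring traefik as an external network keeps the dashboard stack independent of the reverse proxy's own compose project; the play only has to make sure the network exists before the service starts, which is what the Create traefik external network task does.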
2025-10-09 10:20:54.689492 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.689503 | orchestrator | 2025-10-09 10:20:54.689515 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-10-09 10:20:54.689526 | orchestrator | Thursday 09 October 2025 10:20:03 +0000 (0:00:26.944) 0:00:39.975 ****** 2025-10-09 10:20:54.689537 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.689548 | orchestrator | 2025-10-09 10:20:54.689559 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:20:54.689571 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.689585 | orchestrator | 2025-10-09 10:20:54.689596 | orchestrator | 2025-10-09 10:20:54.689607 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:20:54.689618 | orchestrator | Thursday 09 October 2025 10:20:07 +0000 (0:00:04.165) 0:00:44.140 ****** 2025-10-09 10:20:54.689630 | orchestrator | =============================================================================== 2025-10-09 10:20:54.689643 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.94s 2025-10-09 10:20:54.689655 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.53s 2025-10-09 10:20:54.689668 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.17s 2025-10-09 10:20:54.689680 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.62s 2025-10-09 10:20:54.689693 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.20s 2025-10-09 10:20:54.689705 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.47s 2025-10-09 10:20:54.689717 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 1.07s 2025-10-09 10:20:54.689730 | orchestrator | 2025-10-09 10:20:54.689742 | orchestrator | 2025-10-09 10:20:54.689754 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-10-09 10:20:54.689767 | orchestrator | 2025-10-09 10:20:54.689779 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-10-09 10:20:54.689791 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:01.177) 0:00:01.177 ****** 2025-10-09 10:20:54.689804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-10-09 10:20:54.689818 | orchestrator | 2025-10-09 10:20:54.689830 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-10-09 10:20:54.689843 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:00.301) 0:00:01.479 ****** 2025-10-09 10:20:54.689863 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-10-09 10:20:54.689876 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-10-09 10:20:54.689888 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-10-09 10:20:54.689900 | orchestrator | 2025-10-09 10:20:54.689912 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-10-09 
10:20:54.689925 | orchestrator | Thursday 09 October 2025 10:19:30 +0000 (0:00:02.379) 0:00:03.859 ****** 2025-10-09 10:20:54.689937 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.689950 | orchestrator | 2025-10-09 10:20:54.689962 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-10-09 10:20:54.689975 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:04.168) 0:00:08.027 ****** 2025-10-09 10:20:54.690000 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-10-09 10:20:54.690012 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.690083 | orchestrator | 2025-10-09 10:20:54.690107 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-10-09 10:20:54.690119 | orchestrator | Thursday 09 October 2025 10:20:14 +0000 (0:00:39.996) 0:00:48.023 ****** 2025-10-09 10:20:54.690130 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.690141 | orchestrator | 2025-10-09 10:20:54.690152 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-10-09 10:20:54.690163 | orchestrator | Thursday 09 October 2025 10:20:17 +0000 (0:00:03.064) 0:00:51.088 ****** 2025-10-09 10:20:54.690174 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.690185 | orchestrator | 2025-10-09 10:20:54.690201 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-10-09 10:20:54.690213 | orchestrator | Thursday 09 October 2025 10:20:18 +0000 (0:00:01.311) 0:00:52.400 ****** 2025-10-09 10:20:54.690224 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.690235 | orchestrator | 2025-10-09 10:20:54.690246 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-10-09 10:20:54.690258 | orchestrator | Thursday 09 October 2025 10:20:23 +0000 (0:00:04.684) 0:00:57.084 ****** 2025-10-09 10:20:54.690286 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.690298 | orchestrator | 2025-10-09 10:20:54.690310 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-10-09 10:20:54.690321 | orchestrator | Thursday 09 October 2025 10:20:25 +0000 (0:00:02.131) 0:00:59.216 ****** 2025-10-09 10:20:54.690332 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.690343 | orchestrator | 2025-10-09 10:20:54.690354 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-10-09 10:20:54.690365 | orchestrator | Thursday 09 October 2025 10:20:26 +0000 (0:00:01.462) 0:01:00.678 ****** 2025-10-09 10:20:54.690376 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.690386 | orchestrator | 2025-10-09 10:20:54.690398 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:20:54.690409 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.690420 | orchestrator | 2025-10-09 10:20:54.690431 | orchestrator | 2025-10-09 10:20:54.690442 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:20:54.690453 | orchestrator | Thursday 09 October 2025 10:20:28 +0000 (0:00:01.231) 0:01:01.910 ****** 2025-10-09 10:20:54.690464 | orchestrator | 
=============================================================================== 2025-10-09 10:20:54.690475 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.00s 2025-10-09 10:20:54.690487 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.68s 2025-10-09 10:20:54.690498 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 4.17s 2025-10-09 10:20:54.690517 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.06s 2025-10-09 10:20:54.690528 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.38s 2025-10-09 10:20:54.690539 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.13s 2025-10-09 10:20:54.690550 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.46s 2025-10-09 10:20:54.690562 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.31s 2025-10-09 10:20:54.690573 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.23s 2025-10-09 10:20:54.690584 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.30s 2025-10-09 10:20:54.690595 | orchestrator | 2025-10-09 10:20:54.690606 | orchestrator | 2025-10-09 10:20:54.690617 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:20:54.690628 | orchestrator | 2025-10-09 10:20:54.690639 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:20:54.690650 | orchestrator | Thursday 09 October 2025 10:19:26 +0000 (0:00:00.859) 0:00:00.859 ****** 2025-10-09 10:20:54.690661 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-10-09 10:20:54.690672 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-10-09 10:20:54.690683 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-10-09 10:20:54.690694 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-10-09 10:20:54.690705 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-10-09 10:20:54.690716 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-10-09 10:20:54.690727 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-10-09 10:20:54.690738 | orchestrator | 2025-10-09 10:20:54.690749 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-10-09 10:20:54.690760 | orchestrator | 2025-10-09 10:20:54.690771 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-10-09 10:20:54.690782 | orchestrator | Thursday 09 October 2025 10:19:28 +0000 (0:00:02.046) 0:00:02.906 ****** 2025-10-09 10:20:54.690807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:20:54.690826 | orchestrator | 2025-10-09 10:20:54.690837 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-10-09 10:20:54.690849 | orchestrator | Thursday 09 October 2025 10:19:31 +0000 (0:00:02.775) 0:00:05.681 ****** 2025-10-09 
10:20:54.690860 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:20:54.690871 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:20:54.690882 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:20:54.690893 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:20:54.690904 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.690922 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:20:54.690934 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:20:54.690945 | orchestrator | 2025-10-09 10:20:54.690956 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-10-09 10:20:54.690967 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:03.692) 0:00:09.374 ****** 2025-10-09 10:20:54.690978 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:20:54.690990 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:20:54.691001 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.691012 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:20:54.691023 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:20:54.691034 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:20:54.691045 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:20:54.691056 | orchestrator | 2025-10-09 10:20:54.691067 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-10-09 10:20:54.691079 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:04.611) 0:00:13.986 ****** 2025-10-09 10:20:54.691096 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:20:54.691108 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:20:54.691119 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:20:54.691130 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:20:54.691141 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:20:54.691152 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.691163 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:20:54.691174 | orchestrator | 2025-10-09 10:20:54.691186 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-10-09 10:20:54.691197 | orchestrator | Thursday 09 October 2025 10:19:41 +0000 (0:00:02.504) 0:00:16.490 ****** 2025-10-09 10:20:54.691208 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:20:54.691219 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:20:54.691230 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:20:54.691242 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:20:54.691253 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:20:54.691264 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:20:54.691305 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.691317 | orchestrator | 2025-10-09 10:20:54.691328 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-10-09 10:20:54.691339 | orchestrator | Thursday 09 October 2025 10:19:55 +0000 (0:00:13.736) 0:00:30.226 ****** 2025-10-09 10:20:54.691350 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:20:54.691360 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:20:54.691371 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:20:54.691382 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:20:54.691393 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:20:54.691404 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:20:54.691415 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.691426 | 
orchestrator | 2025-10-09 10:20:54.691436 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-10-09 10:20:54.691447 | orchestrator | Thursday 09 October 2025 10:20:26 +0000 (0:00:31.192) 0:01:01.419 ****** 2025-10-09 10:20:54.691490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2025-10-09 10:20:54.691503 | orchestrator | 2025-10-09 10:20:54.691514 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-10-09 10:20:54.691525 | orchestrator | Thursday 09 October 2025 10:20:28 +0000 (0:00:01.564) 0:01:02.983 ****** 2025-10-09 10:20:54.691536 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-10-09 10:20:54.691548 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-10-09 10:20:54.691559 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-10-09 10:20:54.691570 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-10-09 10:20:54.691581 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-10-09 10:20:54.691592 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-10-09 10:20:54.691603 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-10-09 10:20:54.691614 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-10-09 10:20:54.691624 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-10-09 10:20:54.691635 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-10-09 10:20:54.691646 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-10-09 10:20:54.691657 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-10-09 10:20:54.691668 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-10-09 10:20:54.691678 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-10-09 10:20:54.691689 | orchestrator | 2025-10-09 10:20:54.691700 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-10-09 10:20:54.691718 | orchestrator | Thursday 09 October 2025 10:20:37 +0000 (0:00:09.151) 0:01:12.135 ****** 2025-10-09 10:20:54.691729 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.691740 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:20:54.691751 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:20:54.691762 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:20:54.691773 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:20:54.691784 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:20:54.691794 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:20:54.691805 | orchestrator | 2025-10-09 10:20:54.691816 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-10-09 10:20:54.691827 | orchestrator | Thursday 09 October 2025 10:20:39 +0000 (0:00:01.744) 0:01:13.880 ****** 2025-10-09 10:20:54.691838 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:20:54.691849 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:20:54.691860 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:20:54.691871 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.691882 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:20:54.691892 | orchestrator | 
changed: [testbed-node-4] 2025-10-09 10:20:54.691903 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:20:54.691914 | orchestrator | 2025-10-09 10:20:54.691924 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-10-09 10:20:54.691943 | orchestrator | Thursday 09 October 2025 10:20:41 +0000 (0:00:01.944) 0:01:15.824 ****** 2025-10-09 10:20:54.691954 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:20:54.691965 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:20:54.691975 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.691986 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:20:54.691997 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:20:54.692008 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:20:54.692018 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:20:54.692029 | orchestrator | 2025-10-09 10:20:54.692040 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-10-09 10:20:54.692051 | orchestrator | Thursday 09 October 2025 10:20:43 +0000 (0:00:01.858) 0:01:17.683 ****** 2025-10-09 10:20:54.692062 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:20:54.692073 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:20:54.692083 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:20:54.692098 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:20:54.692109 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:20:54.692120 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:54.692131 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:20:54.692142 | orchestrator | 2025-10-09 10:20:54.692153 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-10-09 10:20:54.692164 | orchestrator | Thursday 09 October 2025 10:20:46 +0000 (0:00:03.154) 0:01:20.837 ****** 2025-10-09 10:20:54.692175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-10-09 10:20:54.692187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:20:54.692199 | orchestrator | 2025-10-09 10:20:54.692209 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-10-09 10:20:54.692220 | orchestrator | Thursday 09 October 2025 10:20:47 +0000 (0:00:01.508) 0:01:22.346 ****** 2025-10-09 10:20:54.692231 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.692242 | orchestrator | 2025-10-09 10:20:54.692253 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-10-09 10:20:54.692264 | orchestrator | Thursday 09 October 2025 10:20:49 +0000 (0:00:02.180) 0:01:24.526 ****** 2025-10-09 10:20:54.692295 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:20:54.692306 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:20:54.692317 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:20:54.692328 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:20:54.692350 | orchestrator | changed: [testbed-manager] 2025-10-09 10:20:54.692361 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:20:54.692372 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:20:54.692383 | orchestrator | 2025-10-09 10:20:54.692394 | orchestrator | PLAY RECAP 
********************************************************************* 2025-10-09 10:20:54.692404 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.692416 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.692427 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.692438 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.692449 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.692460 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.692471 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:20:54.692482 | orchestrator | 2025-10-09 10:20:54.692493 | orchestrator | 2025-10-09 10:20:54.692504 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:20:54.692515 | orchestrator | Thursday 09 October 2025 10:20:52 +0000 (0:00:02.800) 0:01:27.327 ****** 2025-10-09 10:20:54.692526 | orchestrator | =============================================================================== 2025-10-09 10:20:54.692537 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 31.19s 2025-10-09 10:20:54.692548 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.74s 2025-10-09 10:20:54.692559 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 9.15s 2025-10-09 10:20:54.692570 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.61s 2025-10-09 10:20:54.692580 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.69s 2025-10-09 10:20:54.692591 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.15s 2025-10-09 10:20:54.692602 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.80s 2025-10-09 10:20:54.692613 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.78s 2025-10-09 10:20:54.692624 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.50s 2025-10-09 10:20:54.692635 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.18s 2025-10-09 10:20:54.692646 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.03s 2025-10-09 10:20:54.692662 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.94s 2025-10-09 10:20:54.692673 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.86s 2025-10-09 10:20:54.692684 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.74s 2025-10-09 10:20:54.692695 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.56s 2025-10-09 10:20:54.692706 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.51s 2025-10-09 10:20:57.747370 | orchestrator | 2025-10-09 10:20:57 | INFO  | Task 
b47c7c25-6d1d-4018-b12d-397b0501cbb6 is in state STARTED 2025-10-09 10:20:57.756029 | orchestrator | 2025-10-09 10:20:57 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:20:57.756063 | orchestrator | 2025-10-09 10:20:57 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:20:57.777765 | orchestrator | 2025-10-09 10:20:57 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:20:57.778636 | orchestrator | 2025-10-09 10:20:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:00.830347 | orchestrator | 2025-10-09 10:21:00 | INFO  | Task b47c7c25-6d1d-4018-b12d-397b0501cbb6 is in state STARTED 2025-10-09 10:21:00.830543 | orchestrator | 2025-10-09 10:21:00 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:00.830558 | orchestrator | 2025-10-09 10:21:00 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:00.830581 | orchestrator | 2025-10-09 10:21:00 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:00.830592 | orchestrator | 2025-10-09 10:21:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:03.900320 | orchestrator | 2025-10-09 10:21:03 | INFO  | Task b47c7c25-6d1d-4018-b12d-397b0501cbb6 is in state SUCCESS 2025-10-09 10:21:03.901814 | orchestrator | 2025-10-09 10:21:03 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:03.903414 | orchestrator | 2025-10-09 10:21:03 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:03.907454 | orchestrator | 2025-10-09 10:21:03 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:03.907479 | orchestrator | 2025-10-09 10:21:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:06.962223 | orchestrator | 2025-10-09 10:21:06 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:06.962693 | orchestrator | 2025-10-09 10:21:06 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:06.963591 | orchestrator | 2025-10-09 10:21:06 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:06.963923 | orchestrator | 2025-10-09 10:21:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:10.003301 | orchestrator | 2025-10-09 10:21:10 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:10.004031 | orchestrator | 2025-10-09 10:21:10 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:10.007160 | orchestrator | 2025-10-09 10:21:10 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:10.007189 | orchestrator | 2025-10-09 10:21:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:13.051784 | orchestrator | 2025-10-09 10:21:13 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:13.052591 | orchestrator | 2025-10-09 10:21:13 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:13.054507 | orchestrator | 2025-10-09 10:21:13 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:13.054708 | orchestrator | 2025-10-09 10:21:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:16.089862 | orchestrator | 2025-10-09 10:21:16 | INFO  | Task 
70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:16.089958 | orchestrator | 2025-10-09 10:21:16 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:16.090844 | orchestrator | 2025-10-09 10:21:16 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:16.090868 | orchestrator | 2025-10-09 10:21:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:19.137565 | orchestrator | 2025-10-09 10:21:19 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:19.139858 | orchestrator | 2025-10-09 10:21:19 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:19.142508 | orchestrator | 2025-10-09 10:21:19 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:19.142548 | orchestrator | 2025-10-09 10:21:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:22.187662 | orchestrator | 2025-10-09 10:21:22 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:22.189593 | orchestrator | 2025-10-09 10:21:22 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:22.190746 | orchestrator | 2025-10-09 10:21:22 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:22.190769 | orchestrator | 2025-10-09 10:21:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:25.244168 | orchestrator | 2025-10-09 10:21:25 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:25.244823 | orchestrator | 2025-10-09 10:21:25 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:25.246253 | orchestrator | 2025-10-09 10:21:25 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:25.246577 | orchestrator | 2025-10-09 10:21:25 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:28.295614 | orchestrator | 2025-10-09 10:21:28 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:28.298590 | orchestrator | 2025-10-09 10:21:28 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:28.301168 | orchestrator | 2025-10-09 10:21:28 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:28.302688 | orchestrator | 2025-10-09 10:21:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:31.355167 | orchestrator | 2025-10-09 10:21:31 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:31.357674 | orchestrator | 2025-10-09 10:21:31 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:31.358666 | orchestrator | 2025-10-09 10:21:31 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:31.358691 | orchestrator | 2025-10-09 10:21:31 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:34.401164 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:34.405651 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:34.414121 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:34.414147 | orchestrator | 2025-10-09 10:21:34 | INFO  | Wait 1 second(s) until the next 
check 2025-10-09 10:21:37.460657 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:37.461782 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:37.463674 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:37.463772 | orchestrator | 2025-10-09 10:21:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:40.519007 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:40.521084 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:40.521810 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:40.521832 | orchestrator | 2025-10-09 10:21:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:43.562194 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:43.562709 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:43.563952 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:43.563976 | orchestrator | 2025-10-09 10:21:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:46.600402 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:46.600640 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:46.601439 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:46.601466 | orchestrator | 2025-10-09 10:21:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:49.646362 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:49.646491 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:49.646529 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:49.646541 | orchestrator | 2025-10-09 10:21:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:52.676531 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:52.676916 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:52.677944 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 10:21:52.678286 | orchestrator | 2025-10-09 10:21:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:55.714924 | orchestrator | 2025-10-09 10:21:55 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:55.715961 | orchestrator | 2025-10-09 10:21:55 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:55.718128 | orchestrator | 2025-10-09 10:21:55 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state STARTED 2025-10-09 
10:21:55.718158 | orchestrator | 2025-10-09 10:21:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:58.751365 | orchestrator | 2025-10-09 10:21:58 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:21:58.751792 | orchestrator | 2025-10-09 10:21:58 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:21:58.752756 | orchestrator | 2025-10-09 10:21:58 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:21:58.753632 | orchestrator | 2025-10-09 10:21:58 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:21:58.755318 | orchestrator | 2025-10-09 10:21:58 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:21:58.761174 | orchestrator | 2025-10-09 10:21:58 | INFO  | Task 59cd6893-d4ee-41a6-940b-ba0cea59cbfe is in state SUCCESS 2025-10-09 10:21:58.764186 | orchestrator | 2025-10-09 10:21:58.764229 | orchestrator | 2025-10-09 10:21:58.764243 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-10-09 10:21:58.764255 | orchestrator | 2025-10-09 10:21:58.764305 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-10-09 10:21:58.764317 | orchestrator | Thursday 09 October 2025 10:19:50 +0000 (0:00:00.327) 0:00:00.327 ****** 2025-10-09 10:21:58.764328 | orchestrator | ok: [testbed-manager] 2025-10-09 10:21:58.764340 | orchestrator | 2025-10-09 10:21:58.764351 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-10-09 10:21:58.764361 | orchestrator | Thursday 09 October 2025 10:19:51 +0000 (0:00:01.489) 0:00:01.816 ****** 2025-10-09 10:21:58.764372 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-10-09 10:21:58.764383 | orchestrator | 2025-10-09 10:21:58.764394 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-10-09 10:21:58.764405 | orchestrator | Thursday 09 October 2025 10:19:52 +0000 (0:00:00.600) 0:00:02.417 ****** 2025-10-09 10:21:58.764416 | orchestrator | changed: [testbed-manager] 2025-10-09 10:21:58.764427 | orchestrator | 2025-10-09 10:21:58.764438 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-10-09 10:21:58.764448 | orchestrator | Thursday 09 October 2025 10:19:53 +0000 (0:00:01.590) 0:00:04.008 ****** 2025-10-09 10:21:58.764459 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
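The "FAILED - RETRYING: … Manage phpmyadmin service (10 retries left)." line above, followed by an eventual ok, is Ansible's retries/until loop at work. A minimal sketch of that pattern, for orientation only; the module choice, paths, and template name are assumptions and not the actual osism.services.phpmyadmin tasks:

```yaml
# Illustrative only -- not the osism.services.phpmyadmin role source.
# Shows the retries/until pattern behind "FAILED - RETRYING ... (n retries left)".
- name: Copy docker-compose.yml file
  ansible.builtin.template:
    src: docker-compose.yml.j2          # assumed template name
    dest: /opt/phpmyadmin/docker-compose.yml
    mode: "0644"
  notify: Restart phpmyadmin service

- name: Manage phpmyadmin service
  ansible.builtin.command: docker compose up -d --wait
  args:
    chdir: /opt/phpmyadmin
  register: result
  retries: 10        # each failed attempt prints "FAILED - RETRYING: ... (n retries left)"
  delay: 10
  until: result is succeeded
```

Each failed attempt is logged as shown above until the command succeeds or the retries are exhausted.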
2025-10-09 10:21:58.764470 | orchestrator | ok: [testbed-manager] 2025-10-09 10:21:58.764481 | orchestrator | 2025-10-09 10:21:58.764492 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-10-09 10:21:58.764502 | orchestrator | Thursday 09 October 2025 10:20:58 +0000 (0:01:04.505) 0:01:08.513 ****** 2025-10-09 10:21:58.764513 | orchestrator | changed: [testbed-manager] 2025-10-09 10:21:58.764524 | orchestrator | 2025-10-09 10:21:58.764534 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:21:58.764546 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:21:58.764559 | orchestrator | 2025-10-09 10:21:58.764570 | orchestrator | 2025-10-09 10:21:58.764580 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:21:58.764591 | orchestrator | Thursday 09 October 2025 10:21:02 +0000 (0:00:04.326) 0:01:12.840 ****** 2025-10-09 10:21:58.764602 | orchestrator | =============================================================================== 2025-10-09 10:21:58.764612 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 64.51s 2025-10-09 10:21:58.764623 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.33s 2025-10-09 10:21:58.764634 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.59s 2025-10-09 10:21:58.764644 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.49s 2025-10-09 10:21:58.764655 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.60s 2025-10-09 10:21:58.764665 | orchestrator | 2025-10-09 10:21:58.764676 | orchestrator | 2025-10-09 10:21:58.764687 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-10-09 10:21:58.764697 | orchestrator | 2025-10-09 10:21:58.764708 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-10-09 10:21:58.764726 | orchestrator | Thursday 09 October 2025 10:19:14 +0000 (0:00:00.346) 0:00:00.346 ****** 2025-10-09 10:21:58.764737 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:21:58.764749 | orchestrator | 2025-10-09 10:21:58.764760 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-10-09 10:21:58.764771 | orchestrator | Thursday 09 October 2025 10:19:15 +0000 (0:00:01.633) 0:00:01.979 ****** 2025-10-09 10:21:58.764795 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-10-09 10:21:58.764808 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-10-09 10:21:58.764820 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-10-09 10:21:58.764832 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-10-09 10:21:58.764844 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-10-09 10:21:58.764857 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-10-09 10:21:58.764868 | orchestrator | changed: 
[testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-10-09 10:21:58.764881 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-10-09 10:21:58.764893 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-10-09 10:21:58.764906 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-10-09 10:21:58.764919 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-10-09 10:21:58.764931 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-10-09 10:21:58.764943 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-10-09 10:21:58.764955 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-10-09 10:21:58.764968 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-10-09 10:21:58.764980 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-10-09 10:21:58.765005 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-10-09 10:21:58.765018 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-10-09 10:21:58.765031 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-10-09 10:21:58.765043 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-10-09 10:21:58.765055 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-10-09 10:21:58.765067 | orchestrator | 2025-10-09 10:21:58.765080 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-10-09 10:21:58.765092 | orchestrator | Thursday 09 October 2025 10:19:20 +0000 (0:00:04.753) 0:00:06.733 ****** 2025-10-09 10:21:58.765104 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:21:58.765117 | orchestrator | 2025-10-09 10:21:58.765130 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-10-09 10:21:58.765142 | orchestrator | Thursday 09 October 2025 10:19:22 +0000 (0:00:01.770) 0:00:08.503 ****** 2025-10-09 10:21:58.765158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.765174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.765201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.765213 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.765225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.765340 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.765392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.765426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765486 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.765548 | orchestrator | 2025-10-09 10:21:58.765559 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-10-09 10:21:58.765575 | orchestrator | Thursday 09 October 2025 10:19:28 +0000 (0:00:06.085) 0:00:14.589 ****** 2025-10-09 10:21:58.765587 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.765599 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765616 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765628 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:21:58.765639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.765655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765678 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:21:58.765690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.765715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.765756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
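The (item={'key': …, 'value': {…}}) fragments echoed in these tasks are entries of a per-service definition dict that the common role loops over. Reconstructed from the loop items in this log, one entry looks roughly as follows; the variable name common_services is an assumption:

```yaml
# Rough shape of one entry in the service dict these tasks iterate over,
# reconstructed from the loop items echoed in the log (variable name assumed).
common_services:
  cron:
    container_name: cron
    group: cron
    enabled: true
    image: registry.osism.tech/kolla/release/cron:3.0.20250928
    environment:
      KOLLA_LOGROTATE_SCHEDULE: daily
    volumes:
      - /etc/kolla/cron/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - kolla_logs:/var/log/kolla/
    dimensions: {}
```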
2025-10-09 10:21:58.765771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765782 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:21:58.765792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.765802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765827 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:21:58.765838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.765853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
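The uniform "skipping" results for the backend internal TLS tasks are what a when: guard on such a loop produces when its condition is false. A hedged sketch of that pattern; backend_tls_enabled and the certificate paths are hypothetical placeholders, not the exact kolla-ansible service-cert-copy source:

```yaml
# Illustrative sketch of the conditional copy pattern; not the exact
# kolla-ansible service-cert-copy source. With the guard false, every
# loop item is reported as "skipping", as seen above.
- name: "common | Copying over backend internal TLS certificate"
  ansible.builtin.copy:
    src: "{{ backend_tls_cert_source }}"                          # hypothetical source variable
    dest: "/etc/kolla/{{ item.value.container_name }}/cert.pem"   # illustrative destination
    mode: "0600"
  with_dict: "{{ common_services }}"    # service definitions as sketched earlier
  when:
    - item.value.enabled | bool
    - backend_tls_enabled | bool        # hypothetical flag; false in this run
```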
2025-10-09 10:21:58.765863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765873 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:21:58.765883 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:21:58.765892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.765906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.765927 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:21:58.765937 | orchestrator | 2025-10-09 10:21:58.765946 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-10-09 10:21:58.765956 | orchestrator | Thursday 09 October 2025 10:19:30 +0000 (0:00:02.392) 0:00:16.981 ****** 2025-10-09 10:21:58.765966 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.765982 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766002 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766013 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:21:58.766089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.766100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766125 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:21:58.766135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.766145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766178 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:21:58.766188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.766198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766219 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:21:58.766232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.766243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766280 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:21:58.766290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.766312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766332 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:21:58.766342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:21:58.766352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.766373 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:21:58.766382 | orchestrator | 2025-10-09 10:21:58.766392 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-10-09 10:21:58.766402 | orchestrator | Thursday 09 October 2025 10:19:35 +0000 (0:00:04.702) 0:00:21.684 ****** 2025-10-09 10:21:58.766411 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:21:58.766421 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:21:58.766430 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:21:58.766440 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:21:58.766450 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:21:58.766459 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:21:58.766468 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:21:58.766478 | orchestrator | 2025-10-09 10:21:58.766488 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-10-09 10:21:58.766502 | orchestrator | Thursday 09 October 2025 10:19:37 +0000 (0:00:01.369) 0:00:23.053 ****** 2025-10-09 10:21:58.766512 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:21:58.766521 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:21:58.766531 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:21:58.766540 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:21:58.766550 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:21:58.766559 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:21:58.766569 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:21:58.766578 | orchestrator | 2025-10-09 10:21:58.766587 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-10-09 10:21:58.766597 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:02.654) 0:00:25.708 ****** 2025-10-09 10:21:58.766612 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.766627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.766638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.766648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.766662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.766687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.766698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.766714 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766734 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766820 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.766840 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:21:58.766881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:21:58.766898 | orchestrator |
2025-10-09 10:21:58.766908 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-10-09 10:21:58.766918 | orchestrator | Thursday 09 October 2025 10:19:49 +0000 (0:00:10.208) 0:00:35.916 ******
2025-10-09 10:21:58.766927 | orchestrator | [WARNING]: Skipped
2025-10-09 10:21:58.766937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-10-09 10:21:58.766947 | orchestrator | to this access issue:
2025-10-09 10:21:58.766957 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-10-09 10:21:58.766966 | orchestrator | directory
2025-10-09 10:21:58.766976 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:21:58.766986 | orchestrator |
2025-10-09 10:21:58.766995 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-10-09 10:21:58.767005 | orchestrator | Thursday 09 October 2025 10:19:52 +0000 (0:00:02.433) 0:00:38.350 ******
2025-10-09 10:21:58.767015 | orchestrator | [WARNING]: Skipped
2025-10-09 10:21:58.767024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-10-09 10:21:58.767034 | orchestrator | to this access issue:
2025-10-09 10:21:58.767044 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-10-09 10:21:58.767053 | orchestrator | directory
2025-10-09 10:21:58.767063 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:21:58.767073 | orchestrator |
2025-10-09 10:21:58.767082 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-10-09 10:21:58.767092 | orchestrator | Thursday 09 October 2025 10:19:53 +0000 (0:00:01.549) 0:00:39.900 ******
2025-10-09 10:21:58.767102 | orchestrator | [WARNING]: Skipped
2025-10-09 10:21:58.767112 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-10-09 10:21:58.767121 | orchestrator | to this access issue:
2025-10-09 10:21:58.767131 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-10-09 10:21:58.767141 | orchestrator | directory
2025-10-09 10:21:58.767151 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:21:58.767161 | orchestrator |
2025-10-09 10:21:58.767176 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-10-09 10:21:58.767186 | orchestrator | Thursday 09 October 2025 10:19:55 +0000 (0:00:01.128) 0:00:41.029 ******
2025-10-09 10:21:58.767196 | orchestrator | [WARNING]: Skipped
2025-10-09 10:21:58.767205 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-10-09 10:21:58.767215 | orchestrator | to this access issue:
2025-10-09 10:21:58.767225 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-10-09 10:21:58.767234 | orchestrator | directory
2025-10-09 10:21:58.767244 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:21:58.767253 | orchestrator |
2025-10-09 10:21:58.767278 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-10-09 10:21:58.767288 | orchestrator | Thursday 09 October 2025 10:19:56 +0000 (0:00:01.001) 0:00:42.030 ******
2025-10-09 10:21:58.767298 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:21:58.767308 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:21:58.767318 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:21:58.767327 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:21:58.767337 | orchestrator | changed: [testbed-manager]
2025-10-09 10:21:58.767346 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:21:58.767356 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:21:58.767366 | orchestrator |
2025-10-09 10:21:58.767381 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-10-09 10:21:58.767390 | orchestrator | Thursday 09 October 2025 10:20:03 +0000 (0:00:07.369) 0:00:49.400 ******
2025-10-09 10:21:58.767400 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:21:58.767410 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:21:58.767419 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:21:58.767429 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:21:58.767439 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:21:58.767448 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:21:58.767457 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:21:58.767467 | orchestrator |
2025-10-09 10:21:58.767476 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-10-09 10:21:58.767486 | orchestrator | Thursday 09 October 2025 10:20:09 +0000 (0:00:06.068) 0:00:55.468 ******
2025-10-09 10:21:58.767496 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:21:58.767506 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:21:58.767515 | orchestrator | changed: [testbed-manager]
2025-10-09 10:21:58.767525 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:21:58.767535 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:21:58.767544 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:21:58.767554 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:21:58.767563 | orchestrator |
2025-10-09 10:21:58.767577 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-10-09 10:21:58.767587 | orchestrator | Thursday 09 October 2025 10:20:13 +0000 (0:00:04.141) 0:00:59.610 ******
2025-10-09 10:21:58.767597 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.767607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.767619 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.767648 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.767667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.767677 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.767687 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.767701 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.767711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.767722 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.767737 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.767753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.767763 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.767773 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.767783 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.767797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.767808 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.767818 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.767842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:21:58.767852 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.767862 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.767873 | orchestrator | 2025-10-09 10:21:58.767882 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-10-09 10:21:58.767892 | orchestrator | Thursday 09 October 2025 10:20:17 +0000 (0:00:03.725) 0:01:03.336 ****** 2025-10-09 10:21:58.767902 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-10-09 10:21:58.767911 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-10-09 10:21:58.767921 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-10-09 10:21:58.767930 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-10-09 10:21:58.767940 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-10-09 10:21:58.767949 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-10-09 10:21:58.767959 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-10-09 10:21:58.767968 | orchestrator | 2025-10-09 10:21:58.767978 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-10-09 10:21:58.767987 | orchestrator | Thursday 09 October 2025 10:20:22 +0000 (0:00:05.295) 0:01:08.631 ****** 2025-10-09 10:21:58.768000 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-10-09 10:21:58.768010 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-10-09 10:21:58.768019 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-10-09 10:21:58.768029 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-10-09 10:21:58.768038 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-10-09 10:21:58.768048 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-10-09 10:21:58.768057 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-10-09 10:21:58.768067 | orchestrator | 2025-10-09 10:21:58.768076 | orchestrator | TASK [common : Check common 
containers] **************************************** 2025-10-09 10:21:58.768085 | orchestrator | Thursday 09 October 2025 10:20:25 +0000 (0:00:03.349) 0:01:11.980 ****** 2025-10-09 10:21:58.768100 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.768111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.768127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.768138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768148 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.768162 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.768188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.768224 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:21:58.768244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768332 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:21:58.768401 | orchestrator | 2025-10-09 10:21:58.768411 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-10-09 10:21:58.768421 | orchestrator | Thursday 09 October 2025 10:20:31 +0000 (0:00:05.701) 0:01:17.682 ****** 2025-10-09 10:21:58.768430 | orchestrator | changed: [testbed-manager] 2025-10-09 10:21:58.768440 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:21:58.768450 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:21:58.768459 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:21:58.768478 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:21:58.768488 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:21:58.768498 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:21:58.768507 | orchestrator | 2025-10-09 10:21:58.768517 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-10-09 10:21:58.768526 | orchestrator | Thursday 09 October 2025 10:20:35 +0000 (0:00:03.475) 0:01:21.157 ****** 2025-10-09 10:21:58.768536 | orchestrator | changed: [testbed-manager] 2025-10-09 10:21:58.768546 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:21:58.768555 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:21:58.768564 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:21:58.768574 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:21:58.768583 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:21:58.768593 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:21:58.768602 | orchestrator | 2025-10-09 10:21:58.768612 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:21:58.768622 | orchestrator | Thursday 09 October 2025 10:20:37 +0000 (0:00:02.651) 0:01:23.809 ****** 2025-10-09 10:21:58.768631 | orchestrator | 2025-10-09 10:21:58.768641 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:21:58.768651 | orchestrator | Thursday 09 October 2025 10:20:37 +0000 (0:00:00.146) 0:01:23.956 ****** 2025-10-09 
10:21:58.768660 | orchestrator |
2025-10-09 10:21:58.768670 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-10-09 10:21:58.768679 | orchestrator | Thursday 09 October 2025 10:20:38 +0000 (0:00:00.126) 0:01:24.082 ******
2025-10-09 10:21:58.768689 | orchestrator |
2025-10-09 10:21:58.768698 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-10-09 10:21:58.768708 | orchestrator | Thursday 09 October 2025 10:20:38 +0000 (0:00:00.093) 0:01:24.175 ******
2025-10-09 10:21:58.768717 | orchestrator |
2025-10-09 10:21:58.768727 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-10-09 10:21:58.768736 | orchestrator | Thursday 09 October 2025 10:20:38 +0000 (0:00:00.412) 0:01:24.588 ******
2025-10-09 10:21:58.768746 | orchestrator |
2025-10-09 10:21:58.768755 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-10-09 10:21:58.768765 | orchestrator | Thursday 09 October 2025 10:20:38 +0000 (0:00:00.091) 0:01:24.680 ******
2025-10-09 10:21:58.768774 | orchestrator |
2025-10-09 10:21:58.768783 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-10-09 10:21:58.768793 | orchestrator | Thursday 09 October 2025 10:20:38 +0000 (0:00:00.078) 0:01:24.758 ******
2025-10-09 10:21:58.768802 | orchestrator |
2025-10-09 10:21:58.768812 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-10-09 10:21:58.768821 | orchestrator | Thursday 09 October 2025 10:20:38 +0000 (0:00:00.109) 0:01:24.867 ******
2025-10-09 10:21:58.768834 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:21:58.768842 | orchestrator | changed: [testbed-manager]
2025-10-09 10:21:58.768850 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:21:58.768858 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:21:58.768866 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:21:58.768874 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:21:58.768882 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:21:58.768890 | orchestrator |
2025-10-09 10:21:58.768898 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-10-09 10:21:58.768906 | orchestrator | Thursday 09 October 2025 10:21:16 +0000 (0:00:37.186) 0:02:02.054 ******
2025-10-09 10:21:58.768914 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:21:58.768921 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:21:58.768929 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:21:58.768937 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:21:58.768945 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:21:58.768953 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:21:58.768961 | orchestrator | changed: [testbed-manager]
2025-10-09 10:21:58.768973 | orchestrator |
2025-10-09 10:21:58.768981 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-10-09 10:21:58.768989 | orchestrator | Thursday 09 October 2025 10:21:46 +0000 (0:00:30.257) 0:02:32.312 ******
2025-10-09 10:21:58.768997 | orchestrator | ok: [testbed-manager]
2025-10-09 10:21:58.769005 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:21:58.769013 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:21:58.769021 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:21:58.769029 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:21:58.769036 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:21:58.769044 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:21:58.769052 | orchestrator |
2025-10-09 10:21:58.769060 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-10-09 10:21:58.769068 | orchestrator | Thursday 09 October 2025 10:21:48 +0000 (0:00:02.122) 0:02:34.434 ******
2025-10-09 10:21:58.769076 | orchestrator | changed: [testbed-manager]
2025-10-09 10:21:58.769084 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:21:58.769092 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:21:58.769100 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:21:58.769108 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:21:58.769115 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:21:58.769123 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:21:58.769131 | orchestrator |
2025-10-09 10:21:58.769139 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:21:58.769147 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:21:58.769155 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:21:58.769163 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:21:58.769171 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:21:58.769182 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:21:58.769191 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:21:58.769198 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:21:58.769206 | orchestrator |
2025-10-09 10:21:58.769214 | orchestrator |
2025-10-09 10:21:58.769222 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:21:58.769230 | orchestrator | Thursday 09 October 2025 10:21:57 +0000 (0:00:08.893) 0:02:43.328 ******
2025-10-09 10:21:58.769238 | orchestrator | ===============================================================================
2025-10-09 10:21:58.769246 | orchestrator | common : Restart fluentd container ------------------------------------- 37.19s
2025-10-09 10:21:58.769254 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.26s
2025-10-09 10:21:58.769276 | orchestrator | common : Copying over config.json files for services ------------------- 10.21s
2025-10-09 10:21:58.769284 | orchestrator | common : Restart cron container ----------------------------------------- 8.89s
2025-10-09 10:21:58.769292 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.37s
2025-10-09 10:21:58.769300 | orchestrator | common : Copying over extra CA certificates (service-cert-copy : common) --- 6.09s
2025-10-09 10:21:58.769307 | orchestrator | common : Copying over cron logrotate config file ------------------------ 6.07s
2025-10-09 10:21:58.769315 | orchestrator | common : Check common containers ---------------------------------------- 5.70s
2025-10-09 10:21:58.769330 | orchestrator | common : Copy
rabbitmq-env.conf to kolla toolbox ------------------------ 5.30s 2025-10-09 10:21:58.769338 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.75s 2025-10-09 10:21:58.769346 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.70s 2025-10-09 10:21:58.769354 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.14s 2025-10-09 10:21:58.769362 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.73s 2025-10-09 10:21:58.769370 | orchestrator | common : Creating log volume -------------------------------------------- 3.48s 2025-10-09 10:21:58.769381 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.35s 2025-10-09 10:21:58.769390 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.65s 2025-10-09 10:21:58.769398 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 2.65s 2025-10-09 10:21:58.769405 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.43s 2025-10-09 10:21:58.769413 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.39s 2025-10-09 10:21:58.769421 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.12s 2025-10-09 10:21:58.769429 | orchestrator | 2025-10-09 10:21:58 | INFO  | Task 050d38ee-2f0b-40d1-8202-9272feaf5f34 is in state STARTED 2025-10-09 10:21:58.769437 | orchestrator | 2025-10-09 10:21:58 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:01.792835 | orchestrator | 2025-10-09 10:22:01 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:01.793146 | orchestrator | 2025-10-09 10:22:01 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:01.793613 | orchestrator | 2025-10-09 10:22:01 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:01.794285 | orchestrator | 2025-10-09 10:22:01 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:01.795031 | orchestrator | 2025-10-09 10:22:01 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:01.795588 | orchestrator | 2025-10-09 10:22:01 | INFO  | Task 050d38ee-2f0b-40d1-8202-9272feaf5f34 is in state STARTED 2025-10-09 10:22:01.795610 | orchestrator | 2025-10-09 10:22:01 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:04.820053 | orchestrator | 2025-10-09 10:22:04 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:04.820340 | orchestrator | 2025-10-09 10:22:04 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:04.820972 | orchestrator | 2025-10-09 10:22:04 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:04.821750 | orchestrator | 2025-10-09 10:22:04 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:04.822352 | orchestrator | 2025-10-09 10:22:04 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:04.823888 | orchestrator | 2025-10-09 10:22:04 | INFO  | Task 050d38ee-2f0b-40d1-8202-9272feaf5f34 is in state STARTED 2025-10-09 10:22:04.823929 | orchestrator | 2025-10-09 10:22:04 | INFO  | Wait 1 second(s) until the next check 2025-10-09 
10:22:07.843434 | orchestrator | 2025-10-09 10:22:07 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:07.843537 | orchestrator | 2025-10-09 10:22:07 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:07.844480 | orchestrator | 2025-10-09 10:22:07 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:07.845466 | orchestrator | 2025-10-09 10:22:07 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:07.846737 | orchestrator | 2025-10-09 10:22:07 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:07.847293 | orchestrator | 2025-10-09 10:22:07 | INFO  | Task 050d38ee-2f0b-40d1-8202-9272feaf5f34 is in state STARTED 2025-10-09 10:22:07.847337 | orchestrator | 2025-10-09 10:22:07 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:10.892971 | orchestrator | 2025-10-09 10:22:10 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:10.893756 | orchestrator | 2025-10-09 10:22:10 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:10.894609 | orchestrator | 2025-10-09 10:22:10 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:10.897664 | orchestrator | 2025-10-09 10:22:10 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:10.898392 | orchestrator | 2025-10-09 10:22:10 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:10.901901 | orchestrator | 2025-10-09 10:22:10 | INFO  | Task 050d38ee-2f0b-40d1-8202-9272feaf5f34 is in state STARTED 2025-10-09 10:22:10.901930 | orchestrator | 2025-10-09 10:22:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:13.938136 | orchestrator | 2025-10-09 10:22:13 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:13.939146 | orchestrator | 2025-10-09 10:22:13 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:13.940517 | orchestrator | 2025-10-09 10:22:13 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:13.941113 | orchestrator | 2025-10-09 10:22:13 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:13.942188 | orchestrator | 2025-10-09 10:22:13 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:13.942685 | orchestrator | 2025-10-09 10:22:13 | INFO  | Task 050d38ee-2f0b-40d1-8202-9272feaf5f34 is in state STARTED 2025-10-09 10:22:13.942709 | orchestrator | 2025-10-09 10:22:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:16.970104 | orchestrator | 2025-10-09 10:22:16 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:16.970362 | orchestrator | 2025-10-09 10:22:16 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:16.971073 | orchestrator | 2025-10-09 10:22:16 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:16.971722 | orchestrator | 2025-10-09 10:22:16 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:16.975251 | orchestrator | 2025-10-09 10:22:16 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:16.975678 | orchestrator | 2025-10-09 10:22:16 | INFO  | Task 
050d38ee-2f0b-40d1-8202-9272feaf5f34 is in state SUCCESS 2025-10-09 10:22:16.975745 | orchestrator | 2025-10-09 10:22:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:20.020056 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:20.023385 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:20.024705 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:20.026702 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:20.029160 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:20.032475 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:20.032587 | orchestrator | 2025-10-09 10:22:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:23.118307 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:23.120513 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:23.120603 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:23.125638 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:23.125665 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:23.125676 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:23.125688 | orchestrator | 2025-10-09 10:22:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:26.206399 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:26.207877 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:26.214886 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:26.214930 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:26.214941 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:26.215895 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:26.215916 | orchestrator | 2025-10-09 10:22:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:29.334161 | orchestrator | 2025-10-09 10:22:29 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state STARTED 2025-10-09 10:22:29.336894 | orchestrator | 2025-10-09 10:22:29 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:29.337623 | orchestrator | 2025-10-09 10:22:29 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:29.339407 | orchestrator | 2025-10-09 10:22:29 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:29.340816 | orchestrator | 2025-10-09 
10:22:29 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:29.341959 | orchestrator | 2025-10-09 10:22:29 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:29.341981 | orchestrator | 2025-10-09 10:22:29 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:32.391171 | orchestrator | 2025-10-09 10:22:32 | INFO  | Task f0e9f1d1-cb74-4ffb-a8e9-6f12b1426153 is in state SUCCESS 2025-10-09 10:22:32.392707 | orchestrator | 2025-10-09 10:22:32.392750 | orchestrator | 2025-10-09 10:22:32.392762 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:22:32.392773 | orchestrator | 2025-10-09 10:22:32.392784 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:22:32.392818 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.282) 0:00:00.282 ****** 2025-10-09 10:22:32.392829 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:22:32.392840 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:22:32.392850 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:22:32.392860 | orchestrator | 2025-10-09 10:22:32.392870 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:22:32.392880 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.299) 0:00:00.582 ****** 2025-10-09 10:22:32.392890 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-10-09 10:22:32.392900 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-10-09 10:22:32.392910 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-10-09 10:22:32.392919 | orchestrator | 2025-10-09 10:22:32.392929 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-10-09 10:22:32.392939 | orchestrator | 2025-10-09 10:22:32.392949 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-10-09 10:22:32.392959 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.451) 0:00:01.033 ****** 2025-10-09 10:22:32.392968 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:22:32.392979 | orchestrator | 2025-10-09 10:22:32.392988 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-10-09 10:22:32.392998 | orchestrator | Thursday 09 October 2025 10:22:03 +0000 (0:00:00.544) 0:00:01.577 ****** 2025-10-09 10:22:32.393008 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-10-09 10:22:32.393018 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-10-09 10:22:32.393028 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-10-09 10:22:32.393037 | orchestrator | 2025-10-09 10:22:32.393047 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-10-09 10:22:32.393070 | orchestrator | Thursday 09 October 2025 10:22:04 +0000 (0:00:00.862) 0:00:02.440 ****** 2025-10-09 10:22:32.393080 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-10-09 10:22:32.393090 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-10-09 10:22:32.393099 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-10-09 10:22:32.393109 | orchestrator | 2025-10-09 10:22:32.393118 | orchestrator | TASK [memcached 
: Check memcached container] *********************************** 2025-10-09 10:22:32.393128 | orchestrator | Thursday 09 October 2025 10:22:06 +0000 (0:00:02.108) 0:00:04.548 ****** 2025-10-09 10:22:32.393137 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:22:32.393147 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:22:32.393157 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:22:32.393167 | orchestrator | 2025-10-09 10:22:32.393177 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-10-09 10:22:32.393186 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:01.881) 0:00:06.430 ****** 2025-10-09 10:22:32.393196 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:22:32.393205 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:22:32.393215 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:22:32.393225 | orchestrator | 2025-10-09 10:22:32.393234 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:22:32.393244 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:22:32.393297 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:22:32.393309 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:22:32.393320 | orchestrator | 2025-10-09 10:22:32.393331 | orchestrator | 2025-10-09 10:22:32.393342 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:22:32.393361 | orchestrator | Thursday 09 October 2025 10:22:16 +0000 (0:00:07.902) 0:00:14.333 ****** 2025-10-09 10:22:32.393371 | orchestrator | =============================================================================== 2025-10-09 10:22:32.393382 | orchestrator | memcached : Restart memcached container --------------------------------- 7.90s 2025-10-09 10:22:32.393393 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.11s 2025-10-09 10:22:32.393403 | orchestrator | memcached : Check memcached container ----------------------------------- 1.88s 2025-10-09 10:22:32.393414 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.86s 2025-10-09 10:22:32.393425 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.54s 2025-10-09 10:22:32.393435 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-10-09 10:22:32.393446 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-10-09 10:22:32.393456 | orchestrator | 2025-10-09 10:22:32.393467 | orchestrator | 2025-10-09 10:22:32.393478 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:22:32.393490 | orchestrator | 2025-10-09 10:22:32.393500 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:22:32.393511 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.279) 0:00:00.279 ****** 2025-10-09 10:22:32.393522 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:22:32.393532 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:22:32.393543 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:22:32.393554 | orchestrator | 2025-10-09 10:22:32.393565 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:22:32.393589 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.333) 0:00:00.612 ****** 2025-10-09 10:22:32.393600 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-10-09 10:22:32.393611 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-10-09 10:22:32.393622 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-10-09 10:22:32.393633 | orchestrator | 2025-10-09 10:22:32.393644 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-10-09 10:22:32.393654 | orchestrator | 2025-10-09 10:22:32.393664 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-10-09 10:22:32.393673 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.446) 0:00:01.059 ****** 2025-10-09 10:22:32.393683 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:22:32.393692 | orchestrator | 2025-10-09 10:22:32.393702 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-10-09 10:22:32.393711 | orchestrator | Thursday 09 October 2025 10:22:03 +0000 (0:00:00.551) 0:00:01.611 ****** 2025-10-09 10:22:32.393724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 
'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393813 | orchestrator | 2025-10-09 10:22:32.393823 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-10-09 10:22:32.393833 | orchestrator | Thursday 09 October 2025 10:22:04 +0000 (0:00:01.357) 0:00:02.968 ****** 2025-10-09 10:22:32.393843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 
'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393923 | orchestrator | 2025-10-09 10:22:32.393933 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-10-09 10:22:32.393943 | orchestrator | Thursday 09 October 2025 10:22:07 +0000 (0:00:02.527) 0:00:05.496 ****** 2025-10-09 10:22:32.393953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393968 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.393994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394084 | orchestrator | 2025-10-09 10:22:32.394094 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-10-09 10:22:32.394104 | 
orchestrator | Thursday 09 October 2025 10:22:10 +0000 (0:00:03.108) 0:00:08.605 ****** 2025-10-09 10:22:32.394114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250928', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-10-09 10:22:32.394195 | orchestrator | 2025-10-09 10:22:32.394205 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-10-09 10:22:32.394215 | orchestrator | Thursday 09 October 2025 10:22:12 +0000 (0:00:02.127) 0:00:10.732 ****** 2025-10-09 10:22:32.394225 | orchestrator | 2025-10-09 10:22:32.394235 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-10-09 10:22:32.394245 | orchestrator | Thursday 09 October 2025 10:22:12 +0000 (0:00:00.109) 0:00:10.842 ****** 2025-10-09 10:22:32.394273 | orchestrator | 2025-10-09 10:22:32.394283 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-10-09 10:22:32.394293 | orchestrator | Thursday 09 October 2025 10:22:12 +0000 (0:00:00.081) 0:00:10.924 ****** 2025-10-09 10:22:32.394303 | orchestrator | 2025-10-09 10:22:32.394313 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-10-09 10:22:32.394336 | orchestrator | Thursday 09 October 2025 10:22:12 +0000 (0:00:00.078) 0:00:11.003 ****** 2025-10-09 10:22:32.394346 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:22:32.394356 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:22:32.394365 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:22:32.394375 | orchestrator | 2025-10-09 10:22:32.394385 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-10-09 10:22:32.394394 | orchestrator | Thursday 09 October 2025 10:22:19 +0000 (0:00:07.008) 0:00:18.011 ****** 2025-10-09 10:22:32.394404 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:22:32.394455 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:22:32.394467 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:22:32.394477 | orchestrator | 2025-10-09 10:22:32.394487 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:22:32.394497 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:22:32.394512 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:22:32.394522 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:22:32.394532 | orchestrator | 2025-10-09 10:22:32.394541 | orchestrator | 2025-10-09 10:22:32.394551 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:22:32.394560 | orchestrator | Thursday 09 October 2025 10:22:30 +0000 (0:00:10.172) 0:00:28.184 ****** 2025-10-09 10:22:32.394570 | orchestrator | =============================================================================== 2025-10-09 10:22:32.394580 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.17s 
2025-10-09 10:22:32.394589 | orchestrator | redis : Restart redis container ----------------------------------------- 7.01s 2025-10-09 10:22:32.394599 | orchestrator | redis : Copying over redis config files --------------------------------- 3.11s 2025-10-09 10:22:32.394608 | orchestrator | redis : Copying over default config.json files -------------------------- 2.53s 2025-10-09 10:22:32.394618 | orchestrator | redis : Check redis containers ------------------------------------------ 2.13s 2025-10-09 10:22:32.394628 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.36s 2025-10-09 10:22:32.394637 | orchestrator | redis : include_tasks --------------------------------------------------- 0.55s 2025-10-09 10:22:32.394647 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-10-09 10:22:32.394656 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-10-09 10:22:32.394666 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.27s 2025-10-09 10:22:32.394676 | orchestrator | 2025-10-09 10:22:32 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:32.394755 | orchestrator | 2025-10-09 10:22:32 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:32.394970 | orchestrator | 2025-10-09 10:22:32 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:32.396118 | orchestrator | 2025-10-09 10:22:32 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:32.397315 | orchestrator | 2025-10-09 10:22:32 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:32.397397 | orchestrator | 2025-10-09 10:22:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:35.437971 | orchestrator | 2025-10-09 10:22:35 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:35.438606 | orchestrator | 2025-10-09 10:22:35 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:35.439416 | orchestrator | 2025-10-09 10:22:35 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:35.440143 | orchestrator | 2025-10-09 10:22:35 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:35.442761 | orchestrator | 2025-10-09 10:22:35 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:35.442804 | orchestrator | 2025-10-09 10:22:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:38.513371 | orchestrator | 2025-10-09 10:22:38 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:38.513482 | orchestrator | 2025-10-09 10:22:38 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:38.513498 | orchestrator | 2025-10-09 10:22:38 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:38.513510 | orchestrator | 2025-10-09 10:22:38 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:38.513521 | orchestrator | 2025-10-09 10:22:38 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:38.513533 | orchestrator | 2025-10-09 10:22:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:41.613189 | orchestrator | 2025-10-09 
10:22:41 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:41.613711 | orchestrator | 2025-10-09 10:22:41 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:41.617218 | orchestrator | 2025-10-09 10:22:41 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:41.617958 | orchestrator | 2025-10-09 10:22:41 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:41.618966 | orchestrator | 2025-10-09 10:22:41 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:41.618989 | orchestrator | 2025-10-09 10:22:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:44.695121 | orchestrator | 2025-10-09 10:22:44 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:44.698096 | orchestrator | 2025-10-09 10:22:44 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:44.699371 | orchestrator | 2025-10-09 10:22:44 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:44.700308 | orchestrator | 2025-10-09 10:22:44 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:44.702478 | orchestrator | 2025-10-09 10:22:44 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:44.702503 | orchestrator | 2025-10-09 10:22:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:47.749176 | orchestrator | 2025-10-09 10:22:47 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:47.750773 | orchestrator | 2025-10-09 10:22:47 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:47.751990 | orchestrator | 2025-10-09 10:22:47 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:47.752598 | orchestrator | 2025-10-09 10:22:47 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:47.753794 | orchestrator | 2025-10-09 10:22:47 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:47.753810 | orchestrator | 2025-10-09 10:22:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:50.884375 | orchestrator | 2025-10-09 10:22:50 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:50.884477 | orchestrator | 2025-10-09 10:22:50 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:50.884492 | orchestrator | 2025-10-09 10:22:50 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:50.884504 | orchestrator | 2025-10-09 10:22:50 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:50.884515 | orchestrator | 2025-10-09 10:22:50 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:50.884527 | orchestrator | 2025-10-09 10:22:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:53.875985 | orchestrator | 2025-10-09 10:22:53 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:53.876091 | orchestrator | 2025-10-09 10:22:53 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:53.876817 | orchestrator | 2025-10-09 10:22:53 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:53.877605 | 
orchestrator | 2025-10-09 10:22:53 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:53.882200 | orchestrator | 2025-10-09 10:22:53 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:53.882226 | orchestrator | 2025-10-09 10:22:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:56.920944 | orchestrator | 2025-10-09 10:22:56 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:22:56.921495 | orchestrator | 2025-10-09 10:22:56 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:22:56.923530 | orchestrator | 2025-10-09 10:22:56 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:22:56.924893 | orchestrator | 2025-10-09 10:22:56 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:22:56.928670 | orchestrator | 2025-10-09 10:22:56 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:22:56.928694 | orchestrator | 2025-10-09 10:22:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:00.015085 | orchestrator | 2025-10-09 10:22:59 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:00.015185 | orchestrator | 2025-10-09 10:22:59 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:23:00.015201 | orchestrator | 2025-10-09 10:22:59 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:00.015214 | orchestrator | 2025-10-09 10:22:59 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:00.015226 | orchestrator | 2025-10-09 10:22:59 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:00.015238 | orchestrator | 2025-10-09 10:22:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:03.061991 | orchestrator | 2025-10-09 10:23:03 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:03.063012 | orchestrator | 2025-10-09 10:23:03 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:23:03.067675 | orchestrator | 2025-10-09 10:23:03 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:03.072114 | orchestrator | 2025-10-09 10:23:03 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:03.075950 | orchestrator | 2025-10-09 10:23:03 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:03.076130 | orchestrator | 2025-10-09 10:23:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:06.192187 | orchestrator | 2025-10-09 10:23:06 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:06.196420 | orchestrator | 2025-10-09 10:23:06 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:23:06.197370 | orchestrator | 2025-10-09 10:23:06 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:06.198209 | orchestrator | 2025-10-09 10:23:06 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:06.199025 | orchestrator | 2025-10-09 10:23:06 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:06.199049 | orchestrator | 2025-10-09 10:23:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:09.257200 | 
orchestrator | 2025-10-09 10:23:09 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:09.257335 | orchestrator | 2025-10-09 10:23:09 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:23:09.259512 | orchestrator | 2025-10-09 10:23:09 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:09.260434 | orchestrator | 2025-10-09 10:23:09 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:09.263657 | orchestrator | 2025-10-09 10:23:09 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:09.263866 | orchestrator | 2025-10-09 10:23:09 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:12.411204 | orchestrator | 2025-10-09 10:23:12 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:12.414309 | orchestrator | 2025-10-09 10:23:12 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:23:12.415864 | orchestrator | 2025-10-09 10:23:12 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:12.417755 | orchestrator | 2025-10-09 10:23:12 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:12.418651 | orchestrator | 2025-10-09 10:23:12 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:12.420004 | orchestrator | 2025-10-09 10:23:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:15.528755 | orchestrator | 2025-10-09 10:23:15 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:15.528868 | orchestrator | 2025-10-09 10:23:15 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:23:15.528884 | orchestrator | 2025-10-09 10:23:15 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:15.528896 | orchestrator | 2025-10-09 10:23:15 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:15.528907 | orchestrator | 2025-10-09 10:23:15 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:15.528918 | orchestrator | 2025-10-09 10:23:15 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:18.582380 | orchestrator | 2025-10-09 10:23:18 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:18.582481 | orchestrator | 2025-10-09 10:23:18 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state STARTED 2025-10-09 10:23:18.582497 | orchestrator | 2025-10-09 10:23:18 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:18.582536 | orchestrator | 2025-10-09 10:23:18 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:18.582548 | orchestrator | 2025-10-09 10:23:18 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:18.582575 | orchestrator | 2025-10-09 10:23:18 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:21.656934 | orchestrator | 2025-10-09 10:23:21 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:21.658149 | orchestrator | 2025-10-09 10:23:21 | INFO  | Task 8c69ebb1-229f-4ffe-95f7-506464f3eace is in state SUCCESS 2025-10-09 10:23:21.660278 | orchestrator | 2025-10-09 10:23:21.660368 | orchestrator | 2025-10-09 10:23:21.660383 | orchestrator | PLAY [Group 
hosts based on configuration] ************************************** 2025-10-09 10:23:21.660396 | orchestrator | 2025-10-09 10:23:21.660408 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:23:21.660419 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.331) 0:00:00.331 ****** 2025-10-09 10:23:21.660431 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:21.660443 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:21.660454 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:21.660465 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:21.660476 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:21.660487 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:21.660497 | orchestrator | 2025-10-09 10:23:21.660509 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:23:21.660520 | orchestrator | Thursday 09 October 2025 10:22:03 +0000 (0:00:00.753) 0:00:01.085 ****** 2025-10-09 10:23:21.660531 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:23:21.660542 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:23:21.660553 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:23:21.660564 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:23:21.660575 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:23:21.660585 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:23:21.660596 | orchestrator | 2025-10-09 10:23:21.660607 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-10-09 10:23:21.660618 | orchestrator | 2025-10-09 10:23:21.660629 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-10-09 10:23:21.660640 | orchestrator | Thursday 09 October 2025 10:22:03 +0000 (0:00:00.706) 0:00:01.792 ****** 2025-10-09 10:23:21.660652 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:23:21.660664 | orchestrator | 2025-10-09 10:23:21.660675 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-10-09 10:23:21.660686 | orchestrator | Thursday 09 October 2025 10:22:05 +0000 (0:00:01.455) 0:00:03.247 ****** 2025-10-09 10:23:21.660697 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-10-09 10:23:21.660709 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-10-09 10:23:21.660720 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-10-09 10:23:21.660731 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-10-09 10:23:21.660742 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-10-09 10:23:21.660753 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-10-09 10:23:21.660764 | orchestrator | 2025-10-09 10:23:21.660775 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-10-09 10:23:21.660786 | orchestrator | Thursday 09 October 2025 10:22:06 +0000 (0:00:01.376) 0:00:04.623 ****** 2025-10-09 
10:23:21.660833 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-10-09 10:23:21.660845 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-10-09 10:23:21.660856 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-10-09 10:23:21.660867 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-10-09 10:23:21.660877 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-10-09 10:23:21.660888 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-10-09 10:23:21.660898 | orchestrator | 2025-10-09 10:23:21.660909 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-10-09 10:23:21.660920 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:01.703) 0:00:06.327 ****** 2025-10-09 10:23:21.660931 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-10-09 10:23:21.660942 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:21.660953 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-10-09 10:23:21.660964 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:21.660974 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-10-09 10:23:21.660985 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-10-09 10:23:21.660995 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:21.661006 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-10-09 10:23:21.661017 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:21.661027 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:21.661038 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-10-09 10:23:21.661049 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:21.661059 | orchestrator | 2025-10-09 10:23:21.661070 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-10-09 10:23:21.661080 | orchestrator | Thursday 09 October 2025 10:22:10 +0000 (0:00:01.733) 0:00:08.060 ****** 2025-10-09 10:23:21.661091 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:21.661102 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:21.661112 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:21.661123 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:21.661133 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:21.661144 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:21.661155 | orchestrator | 2025-10-09 10:23:21.661173 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-10-09 10:23:21.661184 | orchestrator | Thursday 09 October 2025 10:22:11 +0000 (0:00:01.282) 0:00:09.343 ****** 2025-10-09 10:23:21.661215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661441 | orchestrator | 2025-10-09 10:23:21.661452 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-10-09 10:23:21.661463 | orchestrator | Thursday 09 October 2025 10:22:13 +0000 (0:00:02.192) 0:00:11.535 ****** 2025-10-09 10:23:21.661475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661649 | orchestrator | 2025-10-09 10:23:21.661660 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl 
wrapper] **************************** 2025-10-09 10:23:21.661671 | orchestrator | Thursday 09 October 2025 10:22:16 +0000 (0:00:03.117) 0:00:14.653 ****** 2025-10-09 10:23:21.661682 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:21.661693 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:21.661704 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:21.661714 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:21.661725 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:21.661735 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:21.661746 | orchestrator | 2025-10-09 10:23:21.661757 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-10-09 10:23:21.661768 | orchestrator | Thursday 09 October 2025 10:22:18 +0000 (0:00:01.973) 0:00:16.626 ****** 2025-10-09 10:23:21.661779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661932 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:23:21.661956 | orchestrator | 2025-10-09 10:23:21.661966 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-09 10:23:21.661977 | orchestrator | Thursday 09 October 2025 10:22:21 +0000 (0:00:02.795) 0:00:19.421 ****** 2025-10-09 10:23:21.661988 | orchestrator | 2025-10-09 10:23:21.661999 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-09 10:23:21.662010 | orchestrator | Thursday 09 October 2025 10:22:22 +0000 (0:00:01.122) 0:00:20.543 ****** 2025-10-09 10:23:21.662091 | orchestrator | 2025-10-09 10:23:21.662104 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-09 
10:23:21.662116 | orchestrator | Thursday 09 October 2025 10:22:22 +0000 (0:00:00.391) 0:00:20.935 ****** 2025-10-09 10:23:21.662127 | orchestrator | 2025-10-09 10:23:21.662138 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-09 10:23:21.662150 | orchestrator | Thursday 09 October 2025 10:22:23 +0000 (0:00:00.397) 0:00:21.333 ****** 2025-10-09 10:23:21.662161 | orchestrator | 2025-10-09 10:23:21.662172 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-09 10:23:21.662183 | orchestrator | Thursday 09 October 2025 10:22:23 +0000 (0:00:00.151) 0:00:21.485 ****** 2025-10-09 10:23:21.662195 | orchestrator | 2025-10-09 10:23:21.662206 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-10-09 10:23:21.662217 | orchestrator | Thursday 09 October 2025 10:22:23 +0000 (0:00:00.163) 0:00:21.648 ****** 2025-10-09 10:23:21.662228 | orchestrator | 2025-10-09 10:23:21.662239 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-10-09 10:23:21.662305 | orchestrator | Thursday 09 October 2025 10:22:23 +0000 (0:00:00.140) 0:00:21.789 ****** 2025-10-09 10:23:21.662316 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:21.662327 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:21.662338 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:21.662348 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:21.662359 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:21.662378 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:21.662388 | orchestrator | 2025-10-09 10:23:21.662400 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-10-09 10:23:21.662410 | orchestrator | Thursday 09 October 2025 10:22:35 +0000 (0:00:11.941) 0:00:33.734 ****** 2025-10-09 10:23:21.662421 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:21.662432 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:21.662442 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:21.662453 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:21.662464 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:21.662474 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:21.662485 | orchestrator | 2025-10-09 10:23:21.662495 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-10-09 10:23:21.662506 | orchestrator | Thursday 09 October 2025 10:22:37 +0000 (0:00:01.830) 0:00:35.565 ****** 2025-10-09 10:23:21.662517 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:21.662527 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:21.662538 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:21.662555 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:21.662623 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:21.662637 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:21.662648 | orchestrator | 2025-10-09 10:23:21.662658 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-10-09 10:23:21.662669 | orchestrator | Thursday 09 October 2025 10:22:49 +0000 (0:00:11.689) 0:00:47.254 ****** 2025-10-09 10:23:21.662690 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-10-09 10:23:21.662702 | orchestrator | changed: 
[testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-10-09 10:23:21.662713 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-10-09 10:23:21.662724 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-10-09 10:23:21.662734 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-10-09 10:23:21.662745 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-10-09 10:23:21.662755 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-10-09 10:23:21.662766 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-10-09 10:23:21.662777 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-10-09 10:23:21.662787 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-10-09 10:23:21.662798 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-10-09 10:23:21.662807 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-10-09 10:23:21.662817 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-09 10:23:21.662826 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-09 10:23:21.662836 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-09 10:23:21.662845 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-09 10:23:21.662854 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-09 10:23:21.662875 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-10-09 10:23:21.662884 | orchestrator | 2025-10-09 10:23:21.662894 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-10-09 10:23:21.662904 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:09.508) 0:00:56.763 ****** 2025-10-09 10:23:21.662913 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-10-09 10:23:21.662923 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:21.662932 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-10-09 10:23:21.662942 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:21.662951 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-10-09 10:23:21.662960 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-10-09 10:23:21.662970 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:21.662979 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-10-09 10:23:21.662989 | orchestrator | changed: [testbed-node-2] => 
(item=br-ex) 2025-10-09 10:23:21.662998 | orchestrator | 2025-10-09 10:23:21.663008 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-10-09 10:23:21.663017 | orchestrator | Thursday 09 October 2025 10:23:01 +0000 (0:00:03.190) 0:00:59.953 ****** 2025-10-09 10:23:21.663027 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-10-09 10:23:21.663036 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:21.663046 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-10-09 10:23:21.663055 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:21.663065 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-10-09 10:23:21.663074 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:21.663084 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-10-09 10:23:21.663093 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-10-09 10:23:21.663103 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-10-09 10:23:21.663112 | orchestrator | 2025-10-09 10:23:21.663121 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-10-09 10:23:21.663131 | orchestrator | Thursday 09 October 2025 10:23:06 +0000 (0:00:05.042) 0:01:04.996 ****** 2025-10-09 10:23:21.663140 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:21.663149 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:21.663159 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:21.663168 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:21.663178 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:21.663187 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:21.663196 | orchestrator | 2025-10-09 10:23:21.663210 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:23:21.663220 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:23:21.663235 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:23:21.663261 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:23:21.663271 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:23:21.663281 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:23:21.663290 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:23:21.663306 | orchestrator | 2025-10-09 10:23:21.663315 | orchestrator | 2025-10-09 10:23:21.663325 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:23:21.663334 | orchestrator | Thursday 09 October 2025 10:23:18 +0000 (0:00:11.686) 0:01:16.682 ****** 2025-10-09 10:23:21.663344 | orchestrator | =============================================================================== 2025-10-09 10:23:21.663354 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 23.38s 2025-10-09 10:23:21.663363 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.95s 2025-10-09 10:23:21.663373 | orchestrator | openvswitch : 
Set system-id, hostname and hw-offload -------------------- 9.51s 2025-10-09 10:23:21.663382 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.04s 2025-10-09 10:23:21.663391 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.19s 2025-10-09 10:23:21.663401 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.12s 2025-10-09 10:23:21.663410 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.80s 2025-10-09 10:23:21.663419 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.37s 2025-10-09 10:23:21.663429 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.19s 2025-10-09 10:23:21.663438 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.97s 2025-10-09 10:23:21.663447 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.83s 2025-10-09 10:23:21.663457 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.73s 2025-10-09 10:23:21.663467 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.70s 2025-10-09 10:23:21.663476 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.46s 2025-10-09 10:23:21.663485 | orchestrator | module-load : Load modules ---------------------------------------------- 1.38s 2025-10-09 10:23:21.663495 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.28s 2025-10-09 10:23:21.663504 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s 2025-10-09 10:23:21.663514 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
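For orientation, the openvswitch play recapped above drives standard Open vSwitch operations on each node: the module-load role loads and persists the openvswitch kernel module, and the openvswitch role tags the local OVS database and wires up the external bridge. A rough shell equivalent is sketched below; br-ex, vxlan0 and the per-node system-id values are taken from the task output above, while the exact commands, flags and ordering are illustrative and not lifted from the kolla-ansible role (which performs these steps via the openvswitch containers).

    # module-load: load the kernel module now and persist it across reboots
    modprobe openvswitch
    echo openvswitch > /etc/modules-load.d/openvswitch.conf

    # "Set system-id, hostname and hw-offload": tag the local OVS database (value differs per node)
    ovs-vsctl set Open_vSwitch . external-ids:system-id=testbed-node-0
    ovs-vsctl set Open_vSwitch . external-ids:hostname=testbed-node-0

    # "Ensuring OVS bridge/ports are properly setup" (ran on testbed-node-0..2 only)
    ovs-vsctl --may-exist add-br br-ex
    ovs-vsctl --may-exist add-port br-ex vxlan0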
2025-10-09 10:23:21.663523 | orchestrator | 2025-10-09 10:23:21 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:21.663619 | orchestrator | 2025-10-09 10:23:21 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:21.664023 | orchestrator | 2025-10-09 10:23:21 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:21.664957 | orchestrator | 2025-10-09 10:23:21 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:21.664973 | orchestrator | 2025-10-09 10:23:21 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:24.801558 | orchestrator | 2025-10-09 10:23:24 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:24.801782 | orchestrator | 2025-10-09 10:23:24 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:24.802827 | orchestrator | 2025-10-09 10:23:24 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:24.804103 | orchestrator | 2025-10-09 10:23:24 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:24.806316 | orchestrator | 2025-10-09 10:23:24 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:24.806681 | orchestrator | 2025-10-09 10:23:24 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:28.075921 | orchestrator | 2025-10-09 10:23:28 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:28.077557 | orchestrator | 2025-10-09 10:23:28 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:28.080577 | orchestrator | 2025-10-09 10:23:28 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state STARTED 2025-10-09 10:23:28.082093 | orchestrator | 2025-10-09 10:23:28 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:28.085406 | orchestrator | 2025-10-09 10:23:28 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:28.085430 | orchestrator | 2025-10-09 10:23:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:31.218375 | orchestrator | 2025-10-09 10:23:31 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:31.218457 | orchestrator | 2025-10-09 10:23:31 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:31.218469 | orchestrator | 2025-10-09 10:23:31 | INFO  | Task 655ea4f9-a9f1-4e86-b238-ef074c6b5d4f is in state SUCCESS 2025-10-09 10:23:31.219933 | orchestrator | 2025-10-09 10:23:31.219999 | orchestrator | 2025-10-09 10:23:31.220012 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-10-09 10:23:31.220023 | orchestrator | 2025-10-09 10:23:31.220033 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-10-09 10:23:31.220043 | orchestrator | Thursday 09 October 2025 10:19:15 +0000 (0:00:00.260) 0:00:00.260 ****** 2025-10-09 10:23:31.220053 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:31.220064 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:31.220073 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:31.220083 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.220093 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.220103 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.220112 | orchestrator | 2025-10-09 10:23:31.220122 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-10-09 10:23:31.220132 | orchestrator | Thursday 09 October 2025 10:19:16 +0000 (0:00:00.960) 0:00:01.221 ****** 2025-10-09 10:23:31.220141 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.220152 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.220161 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.220185 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.220195 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.220205 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.220214 | orchestrator | 2025-10-09 10:23:31.220224 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-10-09 10:23:31.220234 | orchestrator | Thursday 09 October 2025 10:19:17 +0000 (0:00:00.853) 0:00:02.075 ****** 2025-10-09 10:23:31.220266 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.220277 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.220287 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.220296 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.220305 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.220315 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.220324 | orchestrator | 2025-10-09 10:23:31.220334 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-10-09 10:23:31.220343 | orchestrator | 
Thursday 09 October 2025 10:19:18 +0000 (0:00:01.155) 0:00:03.230 ****** 2025-10-09 10:23:31.220353 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.220362 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.220371 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:31.220381 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:31.220390 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.220399 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:31.220409 | orchestrator | 2025-10-09 10:23:31.220418 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-10-09 10:23:31.220428 | orchestrator | Thursday 09 October 2025 10:19:20 +0000 (0:00:02.598) 0:00:05.829 ****** 2025-10-09 10:23:31.220462 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:31.220472 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:31.220481 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:31.220490 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.220500 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.220510 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.220521 | orchestrator | 2025-10-09 10:23:31.220533 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-10-09 10:23:31.220543 | orchestrator | Thursday 09 October 2025 10:19:22 +0000 (0:00:01.646) 0:00:07.475 ****** 2025-10-09 10:23:31.220555 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:31.220566 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:31.220576 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:31.220587 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.220598 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.220608 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.220619 | orchestrator | 2025-10-09 10:23:31.220629 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-10-09 10:23:31.220640 | orchestrator | Thursday 09 October 2025 10:19:23 +0000 (0:00:01.458) 0:00:08.934 ****** 2025-10-09 10:23:31.220651 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.220662 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.220672 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.220684 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.220695 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.220705 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.220716 | orchestrator | 2025-10-09 10:23:31.220727 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-10-09 10:23:31.220738 | orchestrator | Thursday 09 October 2025 10:19:24 +0000 (0:00:00.812) 0:00:09.746 ****** 2025-10-09 10:23:31.220749 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.220760 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.220770 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.220780 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.220791 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.220814 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.220825 | orchestrator | 2025-10-09 10:23:31.220836 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-10-09 10:23:31.220847 | orchestrator | Thursday 09 
October 2025 10:19:26 +0000 (0:00:01.437) 0:00:11.184 ****** 2025-10-09 10:23:31.220858 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:23:31.220869 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-09 10:23:31.220878 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.220888 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:23:31.220897 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-09 10:23:31.220907 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.220917 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:23:31.220926 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-09 10:23:31.220936 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.220945 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:23:31.220966 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-09 10:23:31.220976 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.220986 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:23:31.220995 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-09 10:23:31.221005 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.221021 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:23:31.221030 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-09 10:23:31.221040 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.221050 | orchestrator | 2025-10-09 10:23:31.221060 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-10-09 10:23:31.221069 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:01.068) 0:00:12.253 ****** 2025-10-09 10:23:31.221079 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.221088 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.221098 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.221107 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.221117 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.221126 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.221136 | orchestrator | 2025-10-09 10:23:31.221145 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-10-09 10:23:31.221156 | orchestrator | Thursday 09 October 2025 10:19:28 +0000 (0:00:01.628) 0:00:13.881 ****** 2025-10-09 10:23:31.221166 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:31.221175 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:31.221185 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:31.221195 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.221204 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.221213 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.221223 | orchestrator | 2025-10-09 10:23:31.221233 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-10-09 10:23:31.221257 | orchestrator | Thursday 09 October 2025 
10:19:30 +0000 (0:00:01.243) 0:00:15.124 ****** 2025-10-09 10:23:31.221267 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.221277 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:31.221287 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:31.221296 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.221306 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.221315 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:31.221325 | orchestrator | 2025-10-09 10:23:31.221334 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-10-09 10:23:31.221344 | orchestrator | Thursday 09 October 2025 10:19:36 +0000 (0:00:05.997) 0:00:21.122 ****** 2025-10-09 10:23:31.221353 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.221363 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.221372 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.221382 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.221391 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.221401 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.221410 | orchestrator | 2025-10-09 10:23:31.221420 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-10-09 10:23:31.221430 | orchestrator | Thursday 09 October 2025 10:19:38 +0000 (0:00:02.194) 0:00:23.317 ****** 2025-10-09 10:23:31.221439 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.221449 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.221458 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.221468 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.221477 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.221487 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.221496 | orchestrator | 2025-10-09 10:23:31.221506 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-10-09 10:23:31.221516 | orchestrator | Thursday 09 October 2025 10:19:42 +0000 (0:00:03.729) 0:00:27.048 ****** 2025-10-09 10:23:31.221526 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:31.221535 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:31.221545 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:31.221554 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.221570 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.221579 | orchestrator | ok: [testbed-node-2]
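The next two k3s_custom_registries tasks create /etc/rancher/k3s and render the registries.yaml that points the k3s-embedded containerd at a registry mirror. The concrete contents written on the testbed nodes are not shown in this log; as a rough sketch of the documented k3s file format, with a purely illustrative mirror endpoint and credentials:

    # sketch only: the endpoint and credentials below are placeholders, not values from this job
    mkdir -p /etc/rancher/k3s
    cat > /etc/rancher/k3s/registries.yaml <<'EOF'
    mirrors:
      "docker.io":
        endpoint:
          - "https://registry.example.com"
    configs:
      "registry.example.com":
        auth:
          username: pull-user
          password: pull-password
    EOF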
10:23:31.221691 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-10-09 10:23:31.221701 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-10-09 10:23:31.221710 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-10-09 10:23:31.221720 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-10-09 10:23:31.221729 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-10-09 10:23:31.221739 | orchestrator | 2025-10-09 10:23:31.221748 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-10-09 10:23:31.221758 | orchestrator | Thursday 09 October 2025 10:19:48 +0000 (0:00:03.906) 0:00:33.330 ****** 2025-10-09 10:23:31.221768 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:31.221777 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:31.221787 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:31.221797 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.221806 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.221816 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.221825 | orchestrator | 2025-10-09 10:23:31.221873 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-10-09 10:23:31.221884 | orchestrator | 2025-10-09 10:23:31.221916 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-10-09 10:23:31.221927 | orchestrator | Thursday 09 October 2025 10:19:51 +0000 (0:00:02.983) 0:00:36.314 ****** 2025-10-09 10:23:31.221937 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.221946 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.221956 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.221965 | orchestrator | 2025-10-09 10:23:31.221975 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-10-09 10:23:31.221985 | orchestrator | Thursday 09 October 2025 10:19:53 +0000 (0:00:02.192) 0:00:38.507 ****** 2025-10-09 10:23:31.221994 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.222004 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.222013 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.222072 | orchestrator | 2025-10-09 10:23:31.222082 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-10-09 10:23:31.222092 | orchestrator | Thursday 09 October 2025 10:19:56 +0000 (0:00:02.586) 0:00:41.093 ****** 2025-10-09 10:23:31.222101 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.222111 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.222120 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.222130 | orchestrator | 2025-10-09 10:23:31.222139 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-10-09 10:23:31.222149 | orchestrator | Thursday 09 October 2025 10:19:57 +0000 (0:00:01.173) 0:00:42.266 ****** 2025-10-09 10:23:31.222159 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.222168 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.222178 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.222187 | orchestrator | 2025-10-09 10:23:31.222197 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-10-09 10:23:31.222215 | orchestrator | Thursday 09 October 2025 10:19:59 +0000 (0:00:01.723) 0:00:43.990 ****** 2025-10-09 
10:23:31.222224 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.222234 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.222269 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.222279 | orchestrator | 2025-10-09 10:23:31.222289 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-10-09 10:23:31.222299 | orchestrator | Thursday 09 October 2025 10:19:59 +0000 (0:00:00.846) 0:00:44.836 ****** 2025-10-09 10:23:31.222308 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.222318 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.222328 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.222337 | orchestrator | 2025-10-09 10:23:31.222347 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-10-09 10:23:31.222356 | orchestrator | Thursday 09 October 2025 10:20:01 +0000 (0:00:01.610) 0:00:46.447 ****** 2025-10-09 10:23:31.222366 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.222376 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.222385 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.222395 | orchestrator | 2025-10-09 10:23:31.222405 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-10-09 10:23:31.222414 | orchestrator | Thursday 09 October 2025 10:20:04 +0000 (0:00:02.670) 0:00:49.117 ****** 2025-10-09 10:23:31.222424 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:23:31.222434 | orchestrator | 2025-10-09 10:23:31.222443 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-10-09 10:23:31.222453 | orchestrator | Thursday 09 October 2025 10:20:05 +0000 (0:00:01.155) 0:00:50.273 ****** 2025-10-09 10:23:31.222462 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.222472 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.222482 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.222491 | orchestrator | 2025-10-09 10:23:31.222501 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-10-09 10:23:31.222510 | orchestrator | Thursday 09 October 2025 10:20:09 +0000 (0:00:03.802) 0:00:54.076 ****** 2025-10-09 10:23:31.222520 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.222530 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.222539 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.222549 | orchestrator | 2025-10-09 10:23:31.222558 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-10-09 10:23:31.222568 | orchestrator | Thursday 09 October 2025 10:20:09 +0000 (0:00:00.815) 0:00:54.892 ****** 2025-10-09 10:23:31.222578 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.222587 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.222597 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.222606 | orchestrator | 2025-10-09 10:23:31.222616 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-10-09 10:23:31.222631 | orchestrator | Thursday 09 October 2025 10:20:11 +0000 (0:00:01.983) 0:00:56.875 ****** 2025-10-09 10:23:31.222641 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.222650 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.222660 | 
orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.222669 | orchestrator | 2025-10-09 10:23:31.222679 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-10-09 10:23:31.222689 | orchestrator | Thursday 09 October 2025 10:20:14 +0000 (0:00:02.355) 0:00:59.231 ****** 2025-10-09 10:23:31.222698 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.222708 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.222717 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.222727 | orchestrator | 2025-10-09 10:23:31.222737 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-10-09 10:23:31.222746 | orchestrator | Thursday 09 October 2025 10:20:15 +0000 (0:00:00.819) 0:01:00.050 ****** 2025-10-09 10:23:31.222762 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.222771 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.222781 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.222791 | orchestrator | 2025-10-09 10:23:31.222800 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-10-09 10:23:31.222810 | orchestrator | Thursday 09 October 2025 10:20:15 +0000 (0:00:00.859) 0:01:00.910 ****** 2025-10-09 10:23:31.222820 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.222829 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.222839 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.222849 | orchestrator | 2025-10-09 10:23:31.222866 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-10-09 10:23:31.222876 | orchestrator | Thursday 09 October 2025 10:20:20 +0000 (0:00:04.529) 0:01:05.439 ****** 2025-10-09 10:23:31.222886 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-10-09 10:23:31.222896 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-10-09 10:23:31.222906 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-10-09 10:23:31.222916 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-10-09 10:23:31.222925 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-10-09 10:23:31.222935 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-10-09 10:23:31.222945 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-10-09 10:23:31.222954 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-10-09 10:23:31.222964 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2025-10-09 10:23:31.222973 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-10-09 10:23:31.222983 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-10-09 10:23:31.222992 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-10-09 10:23:31.223002 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-10-09 10:23:31.223011 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-10-09 10:23:31.223021 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.223030 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.223040 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.223050 | orchestrator | 2025-10-09 10:23:31.223059 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-10-09 10:23:31.223069 | orchestrator | Thursday 09 October 2025 10:21:14 +0000 (0:00:54.207) 0:01:59.646 ****** 2025-10-09 10:23:31.223078 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.223088 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.223097 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.223107 | orchestrator | 2025-10-09 10:23:31.223116 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-10-09 10:23:31.223132 | orchestrator | Thursday 09 October 2025 10:21:14 +0000 (0:00:00.282) 0:01:59.929 ****** 2025-10-09 10:23:31.223141 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.223151 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.223160 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.223170 | orchestrator | 2025-10-09 10:23:31.223180 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-10-09 10:23:31.223189 | orchestrator | Thursday 09 October 2025 10:21:16 +0000 (0:00:01.040) 0:02:00.970 ****** 2025-10-09 10:23:31.223198 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.223208 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.223217 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.223227 | orchestrator | 2025-10-09 10:23:31.223282 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-10-09 10:23:31.223295 | orchestrator | Thursday 09 October 2025 10:21:17 +0000 (0:00:01.422) 0:02:02.392 ****** 2025-10-09 10:23:31.223305 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.223315 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.223324 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.223334 | orchestrator | 2025-10-09 10:23:31.223343 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-10-09 10:23:31.223353 | orchestrator | Thursday 09 October 2025 10:21:41 +0000 (0:00:24.402) 0:02:26.794 ****** 2025-10-09 10:23:31.223362 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.223372 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.223381 | orchestrator | ok: [testbed-node-1] 
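The block above bootstraps the three control-plane hosts: the cluster is initialized inside a transient k3s-init unit, the play retries until all nodes report as joined, the permanent k3s service is enabled, and the node-token is read back so the agents can join later. As a minimal sketch (not part of the playbook), the same state could be spot-checked by hand on the first server, assuming the standard k3s file locations:

    # illustrative manual checks on testbed-node-0; standard k3s paths assumed
    sudo systemctl status k3s --no-pager                 # service enabled by "Enable and check K3s service"
    sudo k3s kubectl get nodes -o wide                   # all three servers should be listed and Ready
    sudo cat /var/lib/rancher/k3s/server/node-token      # token the agents use to join
    sudo journalctl -u k3s-init --no-pager | tail -n 20  # bootstrap logs, if the transient unit left any
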
2025-10-09 10:23:31.223391 | orchestrator | 2025-10-09 10:23:31.223401 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-10-09 10:23:31.223410 | orchestrator | Thursday 09 October 2025 10:21:42 +0000 (0:00:00.686) 0:02:27.481 ****** 2025-10-09 10:23:31.223420 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.223430 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.223439 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.223448 | orchestrator | 2025-10-09 10:23:31.223458 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-10-09 10:23:31.223468 | orchestrator | Thursday 09 October 2025 10:21:43 +0000 (0:00:00.702) 0:02:28.183 ****** 2025-10-09 10:23:31.223483 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.223493 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.223502 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.223512 | orchestrator | 2025-10-09 10:23:31.223522 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-10-09 10:23:31.223531 | orchestrator | Thursday 09 October 2025 10:21:43 +0000 (0:00:00.645) 0:02:28.829 ****** 2025-10-09 10:23:31.223541 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.223550 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.223560 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.223569 | orchestrator | 2025-10-09 10:23:31.223579 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-10-09 10:23:31.223589 | orchestrator | Thursday 09 October 2025 10:21:44 +0000 (0:00:00.805) 0:02:29.635 ****** 2025-10-09 10:23:31.223598 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.223608 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.223617 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.223627 | orchestrator | 2025-10-09 10:23:31.223636 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-10-09 10:23:31.223646 | orchestrator | Thursday 09 October 2025 10:21:44 +0000 (0:00:00.264) 0:02:29.900 ****** 2025-10-09 10:23:31.223655 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.223665 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.223675 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.223684 | orchestrator | 2025-10-09 10:23:31.223694 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-10-09 10:23:31.223704 | orchestrator | Thursday 09 October 2025 10:21:45 +0000 (0:00:00.624) 0:02:30.525 ****** 2025-10-09 10:23:31.223719 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.223729 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.223739 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.223749 | orchestrator | 2025-10-09 10:23:31.223758 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-10-09 10:23:31.223768 | orchestrator | Thursday 09 October 2025 10:21:46 +0000 (0:00:00.651) 0:02:31.176 ****** 2025-10-09 10:23:31.223777 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.223787 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.223797 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.223806 | orchestrator | 2025-10-09 10:23:31.223816 | orchestrator | TASK [k3s_server : Configure kubectl 
cluster to https://192.168.16.8:6443] ***** 2025-10-09 10:23:31.223825 | orchestrator | Thursday 09 October 2025 10:21:47 +0000 (0:00:01.135) 0:02:32.312 ****** 2025-10-09 10:23:31.223833 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:23:31.223841 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:23:31.223848 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:23:31.223856 | orchestrator | 2025-10-09 10:23:31.223864 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-10-09 10:23:31.223872 | orchestrator | Thursday 09 October 2025 10:21:48 +0000 (0:00:00.856) 0:02:33.168 ****** 2025-10-09 10:23:31.223880 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.223888 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.223896 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.223903 | orchestrator | 2025-10-09 10:23:31.223911 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-10-09 10:23:31.223919 | orchestrator | Thursday 09 October 2025 10:21:48 +0000 (0:00:00.272) 0:02:33.441 ****** 2025-10-09 10:23:31.223927 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.223935 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.223943 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.223950 | orchestrator | 2025-10-09 10:23:31.223958 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-10-09 10:23:31.223976 | orchestrator | Thursday 09 October 2025 10:21:48 +0000 (0:00:00.271) 0:02:33.712 ****** 2025-10-09 10:23:31.223984 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.223992 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.224000 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.224008 | orchestrator | 2025-10-09 10:23:31.224016 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-10-09 10:23:31.224024 | orchestrator | Thursday 09 October 2025 10:21:49 +0000 (0:00:00.766) 0:02:34.479 ****** 2025-10-09 10:23:31.224032 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.224039 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.224047 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.224055 | orchestrator | 2025-10-09 10:23:31.224063 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-10-09 10:23:31.224071 | orchestrator | Thursday 09 October 2025 10:21:50 +0000 (0:00:00.724) 0:02:35.203 ****** 2025-10-09 10:23:31.224079 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-10-09 10:23:31.224091 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-10-09 10:23:31.224099 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-10-09 10:23:31.224107 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-10-09 10:23:31.224115 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-10-09 10:23:31.224122 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-10-09 10:23:31.224130 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-10-09 10:23:31.224149 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-10-09 10:23:31.224156 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-10-09 10:23:31.224164 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-10-09 10:23:31.224172 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-10-09 10:23:31.224184 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-10-09 10:23:31.224192 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-10-09 10:23:31.224200 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-10-09 10:23:31.224208 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-10-09 10:23:31.224216 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-10-09 10:23:31.224224 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-10-09 10:23:31.224232 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-10-09 10:23:31.224239 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-10-09 10:23:31.224260 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-10-09 10:23:31.224268 | orchestrator | 2025-10-09 10:23:31.224276 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-10-09 10:23:31.224284 | orchestrator | 2025-10-09 10:23:31.224292 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-10-09 10:23:31.224300 | orchestrator | Thursday 09 October 2025 10:21:53 +0000 (0:00:03.388) 0:02:38.591 ****** 2025-10-09 10:23:31.224307 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:31.224315 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:31.224323 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:31.224331 | orchestrator | 2025-10-09 10:23:31.224339 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-10-09 10:23:31.224346 | orchestrator | Thursday 09 October 2025 10:21:54 +0000 (0:00:00.433) 0:02:39.025 ****** 2025-10-09 10:23:31.224354 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:31.224362 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:31.224370 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:31.224378 | orchestrator | 2025-10-09 10:23:31.224386 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-10-09 10:23:31.224394 | orchestrator | Thursday 09 October 2025 10:21:54 +0000 (0:00:00.605) 0:02:39.631 ****** 2025-10-09 10:23:31.224401 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:31.224409 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:31.224417 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:31.224425 | orchestrator | 2025-10-09 10:23:31.224432 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] 
********************** 2025-10-09 10:23:31.224440 | orchestrator | Thursday 09 October 2025 10:21:54 +0000 (0:00:00.318) 0:02:39.949 ****** 2025-10-09 10:23:31.224448 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:23:31.224456 | orchestrator | 2025-10-09 10:23:31.224464 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-10-09 10:23:31.224472 | orchestrator | Thursday 09 October 2025 10:21:55 +0000 (0:00:00.631) 0:02:40.581 ****** 2025-10-09 10:23:31.224480 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.224488 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.224496 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.224504 | orchestrator | 2025-10-09 10:23:31.224511 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-10-09 10:23:31.224519 | orchestrator | Thursday 09 October 2025 10:21:55 +0000 (0:00:00.284) 0:02:40.865 ****** 2025-10-09 10:23:31.224532 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.224540 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.224548 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.224556 | orchestrator | 2025-10-09 10:23:31.224564 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-10-09 10:23:31.224572 | orchestrator | Thursday 09 October 2025 10:21:56 +0000 (0:00:00.281) 0:02:41.147 ****** 2025-10-09 10:23:31.224580 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.224587 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.224595 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.224603 | orchestrator | 2025-10-09 10:23:31.224611 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-10-09 10:23:31.224619 | orchestrator | Thursday 09 October 2025 10:21:56 +0000 (0:00:00.286) 0:02:41.433 ****** 2025-10-09 10:23:31.224627 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:31.224635 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:31.224642 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:31.224650 | orchestrator | 2025-10-09 10:23:31.224662 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-10-09 10:23:31.224670 | orchestrator | Thursday 09 October 2025 10:21:57 +0000 (0:00:00.753) 0:02:42.186 ****** 2025-10-09 10:23:31.224678 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:31.224686 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:31.224694 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:31.224702 | orchestrator | 2025-10-09 10:23:31.224709 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-10-09 10:23:31.224717 | orchestrator | Thursday 09 October 2025 10:21:58 +0000 (0:00:01.119) 0:02:43.306 ****** 2025-10-09 10:23:31.224725 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:31.224733 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:31.224741 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:31.224749 | orchestrator | 2025-10-09 10:23:31.224756 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-10-09 10:23:31.224764 | orchestrator | Thursday 09 October 2025 10:21:59 +0000 (0:00:01.302) 0:02:44.608 ****** 
2025-10-09 10:23:31.224772 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:23:31.224780 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:23:31.224788 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:23:31.224795 | orchestrator | 2025-10-09 10:23:31.224803 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-10-09 10:23:31.224811 | orchestrator | 2025-10-09 10:23:31.224823 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-10-09 10:23:31.224831 | orchestrator | Thursday 09 October 2025 10:22:12 +0000 (0:00:12.840) 0:02:57.449 ****** 2025-10-09 10:23:31.224839 | orchestrator | ok: [testbed-manager] 2025-10-09 10:23:31.224847 | orchestrator | 2025-10-09 10:23:31.224855 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-10-09 10:23:31.224862 | orchestrator | Thursday 09 October 2025 10:22:13 +0000 (0:00:01.222) 0:02:58.671 ****** 2025-10-09 10:23:31.224870 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.224878 | orchestrator | 2025-10-09 10:23:31.224886 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-10-09 10:23:31.224894 | orchestrator | Thursday 09 October 2025 10:22:14 +0000 (0:00:00.404) 0:02:59.075 ****** 2025-10-09 10:23:31.224902 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-10-09 10:23:31.224910 | orchestrator | 2025-10-09 10:23:31.224918 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-10-09 10:23:31.224925 | orchestrator | Thursday 09 October 2025 10:22:14 +0000 (0:00:00.597) 0:02:59.673 ****** 2025-10-09 10:23:31.224933 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.224941 | orchestrator | 2025-10-09 10:23:31.224949 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-10-09 10:23:31.224962 | orchestrator | Thursday 09 October 2025 10:22:15 +0000 (0:00:00.813) 0:03:00.487 ****** 2025-10-09 10:23:31.224970 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.224978 | orchestrator | 2025-10-09 10:23:31.224985 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-10-09 10:23:31.224993 | orchestrator | Thursday 09 October 2025 10:22:16 +0000 (0:00:00.574) 0:03:01.061 ****** 2025-10-09 10:23:31.225001 | orchestrator | changed: [testbed-manager -> localhost] 2025-10-09 10:23:31.225009 | orchestrator | 2025-10-09 10:23:31.225017 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-10-09 10:23:31.225025 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:01.703) 0:03:02.765 ****** 2025-10-09 10:23:31.225032 | orchestrator | changed: [testbed-manager -> localhost] 2025-10-09 10:23:31.225040 | orchestrator | 2025-10-09 10:23:31.225048 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-10-09 10:23:31.225056 | orchestrator | Thursday 09 October 2025 10:22:18 +0000 (0:00:00.896) 0:03:03.661 ****** 2025-10-09 10:23:31.225064 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.225072 | orchestrator | 2025-10-09 10:23:31.225080 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-10-09 10:23:31.225087 | orchestrator | Thursday 09 October 2025 10:22:19 +0000 (0:00:00.923) 
0:03:04.585 ****** 2025-10-09 10:23:31.225095 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.225103 | orchestrator | 2025-10-09 10:23:31.225111 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-10-09 10:23:31.225119 | orchestrator | 2025-10-09 10:23:31.225127 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-10-09 10:23:31.225135 | orchestrator | Thursday 09 October 2025 10:22:20 +0000 (0:00:00.611) 0:03:05.196 ****** 2025-10-09 10:23:31.225142 | orchestrator | ok: [testbed-manager] 2025-10-09 10:23:31.225150 | orchestrator | 2025-10-09 10:23:31.225158 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-10-09 10:23:31.225166 | orchestrator | Thursday 09 October 2025 10:22:20 +0000 (0:00:00.131) 0:03:05.328 ****** 2025-10-09 10:23:31.225177 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-10-09 10:23:31.225190 | orchestrator | 2025-10-09 10:23:31.225203 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-10-09 10:23:31.225217 | orchestrator | Thursday 09 October 2025 10:22:20 +0000 (0:00:00.234) 0:03:05.563 ****** 2025-10-09 10:23:31.225229 | orchestrator | ok: [testbed-manager] 2025-10-09 10:23:31.225276 | orchestrator | 2025-10-09 10:23:31.225285 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-10-09 10:23:31.225293 | orchestrator | Thursday 09 October 2025 10:22:21 +0000 (0:00:00.953) 0:03:06.517 ****** 2025-10-09 10:23:31.225301 | orchestrator | ok: [testbed-manager] 2025-10-09 10:23:31.225309 | orchestrator | 2025-10-09 10:23:31.225317 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-10-09 10:23:31.225325 | orchestrator | Thursday 09 October 2025 10:22:23 +0000 (0:00:01.868) 0:03:08.385 ****** 2025-10-09 10:23:31.225332 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.225340 | orchestrator | 2025-10-09 10:23:31.225348 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-10-09 10:23:31.225356 | orchestrator | Thursday 09 October 2025 10:22:24 +0000 (0:00:00.808) 0:03:09.194 ****** 2025-10-09 10:23:31.225364 | orchestrator | ok: [testbed-manager] 2025-10-09 10:23:31.225372 | orchestrator | 2025-10-09 10:23:31.225384 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-10-09 10:23:31.225392 | orchestrator | Thursday 09 October 2025 10:22:24 +0000 (0:00:00.679) 0:03:09.873 ****** 2025-10-09 10:23:31.225400 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.225408 | orchestrator | 2025-10-09 10:23:31.225416 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-10-09 10:23:31.225424 | orchestrator | Thursday 09 October 2025 10:22:34 +0000 (0:00:09.354) 0:03:19.227 ****** 2025-10-09 10:23:31.225437 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.225445 | orchestrator | 2025-10-09 10:23:31.225453 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-10-09 10:23:31.225461 | orchestrator | Thursday 09 October 2025 10:22:50 +0000 (0:00:16.423) 0:03:35.651 ****** 2025-10-09 10:23:31.225469 | orchestrator | ok: [testbed-manager] 2025-10-09 10:23:31.225476 | orchestrator | 
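The two plays above make the new cluster usable from testbed-manager: the kubeconfig is copied from the first master, its server address is pointed at the kube VIP (https://192.168.16.8:6443, as shown in the task name), KUBECONFIG is exported, and kubectl itself is installed from the upstream apt repository. A rough manual equivalent, assuming the default k3s kubeconfig path on the master and leaving the apt repository setup aside, might look like this (illustrative only):

    # sketch of the kubeconfig preparation; paths and the cluster name "default" are assumptions
    mkdir -p ~/.kube
    scp testbed-node-0:/etc/rancher/k3s/k3s.yaml ~/.kube/config
    kubectl config set-cluster default --server=https://192.168.16.8:6443   # "Change server address in the kubeconfig"
    echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
    kubectl get nodes                                                        # sanity check through the VIP
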
2025-10-09 10:23:31.225484 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-10-09 10:23:31.225492 | orchestrator | 2025-10-09 10:23:31.225500 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-10-09 10:23:31.225508 | orchestrator | Thursday 09 October 2025 10:22:51 +0000 (0:00:00.691) 0:03:36.343 ****** 2025-10-09 10:23:31.225516 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.225523 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.225531 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.225539 | orchestrator | 2025-10-09 10:23:31.225551 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-10-09 10:23:31.225560 | orchestrator | Thursday 09 October 2025 10:22:51 +0000 (0:00:00.376) 0:03:36.720 ****** 2025-10-09 10:23:31.225567 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225575 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.225583 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.225591 | orchestrator | 2025-10-09 10:23:31.225598 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-10-09 10:23:31.225606 | orchestrator | Thursday 09 October 2025 10:22:52 +0000 (0:00:00.469) 0:03:37.189 ****** 2025-10-09 10:23:31.225614 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:23:31.225622 | orchestrator | 2025-10-09 10:23:31.225630 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-10-09 10:23:31.225638 | orchestrator | Thursday 09 October 2025 10:22:53 +0000 (0:00:00.870) 0:03:38.059 ****** 2025-10-09 10:23:31.225645 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225653 | orchestrator | 2025-10-09 10:23:31.225661 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-10-09 10:23:31.225669 | orchestrator | Thursday 09 October 2025 10:22:53 +0000 (0:00:00.230) 0:03:38.290 ****** 2025-10-09 10:23:31.225677 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225685 | orchestrator | 2025-10-09 10:23:31.225693 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-10-09 10:23:31.225700 | orchestrator | Thursday 09 October 2025 10:22:53 +0000 (0:00:00.232) 0:03:38.523 ****** 2025-10-09 10:23:31.225708 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225716 | orchestrator | 2025-10-09 10:23:31.225724 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-10-09 10:23:31.225731 | orchestrator | Thursday 09 October 2025 10:22:53 +0000 (0:00:00.296) 0:03:38.820 ****** 2025-10-09 10:23:31.225739 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225747 | orchestrator | 2025-10-09 10:23:31.225755 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-10-09 10:23:31.225763 | orchestrator | Thursday 09 October 2025 10:22:54 +0000 (0:00:00.332) 0:03:39.153 ****** 2025-10-09 10:23:31.225769 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225776 | orchestrator | 2025-10-09 10:23:31.225783 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-10-09 10:23:31.225789 | 
orchestrator | Thursday 09 October 2025 10:22:54 +0000 (0:00:00.285) 0:03:39.438 ****** 2025-10-09 10:23:31.225796 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225803 | orchestrator | 2025-10-09 10:23:31.225809 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-10-09 10:23:31.225816 | orchestrator | Thursday 09 October 2025 10:22:54 +0000 (0:00:00.264) 0:03:39.703 ****** 2025-10-09 10:23:31.225822 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225829 | orchestrator | 2025-10-09 10:23:31.225836 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-10-09 10:23:31.225850 | orchestrator | Thursday 09 October 2025 10:22:55 +0000 (0:00:00.329) 0:03:40.033 ****** 2025-10-09 10:23:31.225857 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225864 | orchestrator | 2025-10-09 10:23:31.225870 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-10-09 10:23:31.225877 | orchestrator | Thursday 09 October 2025 10:22:55 +0000 (0:00:00.283) 0:03:40.316 ****** 2025-10-09 10:23:31.225884 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225890 | orchestrator | 2025-10-09 10:23:31.225897 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-10-09 10:23:31.225903 | orchestrator | Thursday 09 October 2025 10:22:56 +0000 (0:00:00.809) 0:03:41.126 ****** 2025-10-09 10:23:31.225910 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-10-09 10:23:31.225917 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-10-09 10:23:31.225923 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225930 | orchestrator | 2025-10-09 10:23:31.225936 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-10-09 10:23:31.225943 | orchestrator | Thursday 09 October 2025 10:22:56 +0000 (0:00:00.418) 0:03:41.544 ****** 2025-10-09 10:23:31.225949 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225956 | orchestrator | 2025-10-09 10:23:31.225963 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-10-09 10:23:31.225969 | orchestrator | Thursday 09 October 2025 10:22:56 +0000 (0:00:00.229) 0:03:41.774 ****** 2025-10-09 10:23:31.225976 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.225982 | orchestrator | 2025-10-09 10:23:31.225992 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-10-09 10:23:31.225999 | orchestrator | Thursday 09 October 2025 10:22:57 +0000 (0:00:00.241) 0:03:42.016 ****** 2025-10-09 10:23:31.226006 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226013 | orchestrator | 2025-10-09 10:23:31.226039 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-10-09 10:23:31.226045 | orchestrator | Thursday 09 October 2025 10:22:57 +0000 (0:00:00.308) 0:03:42.324 ****** 2025-10-09 10:23:31.226052 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226059 | orchestrator | 2025-10-09 10:23:31.226065 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-10-09 10:23:31.226072 | orchestrator | Thursday 09 October 2025 10:22:57 +0000 (0:00:00.249) 0:03:42.573 ****** 2025-10-09 10:23:31.226079 | orchestrator | skipping: 
[testbed-node-0] 2025-10-09 10:23:31.226085 | orchestrator | 2025-10-09 10:23:31.226092 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-10-09 10:23:31.226099 | orchestrator | Thursday 09 October 2025 10:22:57 +0000 (0:00:00.232) 0:03:42.806 ****** 2025-10-09 10:23:31.226105 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226112 | orchestrator | 2025-10-09 10:23:31.226118 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-10-09 10:23:31.226125 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:00.302) 0:03:43.109 ****** 2025-10-09 10:23:31.226132 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226139 | orchestrator | 2025-10-09 10:23:31.226149 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-10-09 10:23:31.226155 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:00.214) 0:03:43.323 ****** 2025-10-09 10:23:31.226162 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226169 | orchestrator | 2025-10-09 10:23:31.226175 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-10-09 10:23:31.226182 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:00.247) 0:03:43.571 ****** 2025-10-09 10:23:31.226189 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226195 | orchestrator | 2025-10-09 10:23:31.226202 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-10-09 10:23:31.226208 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:00.213) 0:03:43.785 ****** 2025-10-09 10:23:31.226221 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226227 | orchestrator | 2025-10-09 10:23:31.226234 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-10-09 10:23:31.226241 | orchestrator | Thursday 09 October 2025 10:22:59 +0000 (0:00:00.269) 0:03:44.054 ****** 2025-10-09 10:23:31.226259 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226266 | orchestrator | 2025-10-09 10:23:31.226273 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-10-09 10:23:31.226279 | orchestrator | Thursday 09 October 2025 10:22:59 +0000 (0:00:00.764) 0:03:44.818 ****** 2025-10-09 10:23:31.226286 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-10-09 10:23:31.226293 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-10-09 10:23:31.226300 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-10-09 10:23:31.226306 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-10-09 10:23:31.226313 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226320 | orchestrator | 2025-10-09 10:23:31.226326 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-10-09 10:23:31.226333 | orchestrator | Thursday 09 October 2025 10:23:00 +0000 (0:00:00.627) 0:03:45.445 ****** 2025-10-09 10:23:31.226340 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226346 | orchestrator | 2025-10-09 10:23:31.226353 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-10-09 10:23:31.226360 | orchestrator | Thursday 09 October 2025 10:23:00 +0000 
(0:00:00.220) 0:03:45.666 ****** 2025-10-09 10:23:31.226366 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226373 | orchestrator | 2025-10-09 10:23:31.226379 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-10-09 10:23:31.226386 | orchestrator | Thursday 09 October 2025 10:23:00 +0000 (0:00:00.213) 0:03:45.880 ****** 2025-10-09 10:23:31.226393 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226399 | orchestrator | 2025-10-09 10:23:31.226406 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-10-09 10:23:31.226412 | orchestrator | Thursday 09 October 2025 10:23:01 +0000 (0:00:00.263) 0:03:46.144 ****** 2025-10-09 10:23:31.226419 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226425 | orchestrator | 2025-10-09 10:23:31.226432 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-10-09 10:23:31.226439 | orchestrator | Thursday 09 October 2025 10:23:01 +0000 (0:00:00.238) 0:03:46.382 ****** 2025-10-09 10:23:31.226445 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-10-09 10:23:31.226452 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-10-09 10:23:31.226459 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226465 | orchestrator | 2025-10-09 10:23:31.226472 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-10-09 10:23:31.226479 | orchestrator | Thursday 09 October 2025 10:23:01 +0000 (0:00:00.373) 0:03:46.756 ****** 2025-10-09 10:23:31.226485 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.226492 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.226498 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.226505 | orchestrator | 2025-10-09 10:23:31.226512 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-10-09 10:23:31.226518 | orchestrator | Thursday 09 October 2025 10:23:02 +0000 (0:00:00.558) 0:03:47.314 ****** 2025-10-09 10:23:31.226525 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.226532 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.226538 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.226545 | orchestrator | 2025-10-09 10:23:31.226551 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-10-09 10:23:31.226558 | orchestrator | 2025-10-09 10:23:31.226570 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-10-09 10:23:31.226577 | orchestrator | Thursday 09 October 2025 10:23:03 +0000 (0:00:01.258) 0:03:48.572 ****** 2025-10-09 10:23:31.226584 | orchestrator | ok: [testbed-manager] 2025-10-09 10:23:31.226591 | orchestrator | 2025-10-09 10:23:31.226597 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-10-09 10:23:31.226604 | orchestrator | Thursday 09 October 2025 10:23:03 +0000 (0:00:00.172) 0:03:48.745 ****** 2025-10-09 10:23:31.226611 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-10-09 10:23:31.226617 | orchestrator | 2025-10-09 10:23:31.226624 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-10-09 10:23:31.226630 | 
orchestrator | Thursday 09 October 2025 10:23:04 +0000 (0:00:00.311) 0:03:49.056 ****** 2025-10-09 10:23:31.226637 | orchestrator | changed: [testbed-manager] 2025-10-09 10:23:31.226644 | orchestrator | 2025-10-09 10:23:31.226650 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-10-09 10:23:31.226657 | orchestrator | 2025-10-09 10:23:31.226664 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-10-09 10:23:31.226670 | orchestrator | Thursday 09 October 2025 10:23:11 +0000 (0:00:07.686) 0:03:56.743 ****** 2025-10-09 10:23:31.226677 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:23:31.226684 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:23:31.226690 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:23:31.226700 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:23:31.226707 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:23:31.226714 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:23:31.226721 | orchestrator | 2025-10-09 10:23:31.226727 | orchestrator | TASK [Manage labels] *********************************************************** 2025-10-09 10:23:31.226734 | orchestrator | Thursday 09 October 2025 10:23:13 +0000 (0:00:01.359) 0:03:58.102 ****** 2025-10-09 10:23:31.226741 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-10-09 10:23:31.227216 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-10-09 10:23:31.227229 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-10-09 10:23:31.227237 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-10-09 10:23:31.227260 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-10-09 10:23:31.227271 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-10-09 10:23:31.227278 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-10-09 10:23:31.227285 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-10-09 10:23:31.227292 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-10-09 10:23:31.227298 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-10-09 10:23:31.227305 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-10-09 10:23:31.227312 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-10-09 10:23:31.227318 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-10-09 10:23:31.227325 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-10-09 10:23:31.227331 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-10-09 10:23:31.227338 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-10-09 10:23:31.227344 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-10-09 10:23:31.227351 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 
2025-10-09 10:23:31.227366 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-10-09 10:23:31.227373 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-10-09 10:23:31.227379 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-10-09 10:23:31.227386 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-10-09 10:23:31.227392 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-10-09 10:23:31.227399 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-10-09 10:23:31.227405 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-10-09 10:23:31.227412 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-10-09 10:23:31.227418 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-10-09 10:23:31.227425 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-10-09 10:23:31.227431 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-10-09 10:23:31.227438 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-10-09 10:23:31.227444 | orchestrator | 2025-10-09 10:23:31.227451 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-10-09 10:23:31.227458 | orchestrator | Thursday 09 October 2025 10:23:29 +0000 (0:00:15.970) 0:04:14.073 ****** 2025-10-09 10:23:31.227464 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.227471 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.227478 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.227484 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.227491 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.227498 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.227504 | orchestrator | 2025-10-09 10:23:31.227511 | orchestrator | TASK [Manage taints] *********************************************************** 2025-10-09 10:23:31.227517 | orchestrator | Thursday 09 October 2025 10:23:29 +0000 (0:00:00.829) 0:04:14.902 ****** 2025-10-09 10:23:31.227524 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:31.227530 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:31.227537 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:31.227544 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:31.227550 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:31.227557 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:31.227563 | orchestrator | 2025-10-09 10:23:31.227570 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:23:31.227577 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:23:31.227591 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-10-09 10:23:31.227599 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-10-09 10:23:31.227606 | orchestrator | testbed-node-2 : ok=39  changed=17  
unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-10-09 10:23:31.227612 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-10-09 10:23:31.227619 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-10-09 10:23:31.227629 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-10-09 10:23:31.227640 | orchestrator | 2025-10-09 10:23:31.227647 | orchestrator | 2025-10-09 10:23:31.227654 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:23:31.227661 | orchestrator | Thursday 09 October 2025 10:23:30 +0000 (0:00:00.634) 0:04:15.536 ****** 2025-10-09 10:23:31.227667 | orchestrator | =============================================================================== 2025-10-09 10:23:31.227674 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.21s 2025-10-09 10:23:31.227681 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.40s 2025-10-09 10:23:31.227687 | orchestrator | kubectl : Install required packages ------------------------------------ 16.42s 2025-10-09 10:23:31.227694 | orchestrator | Manage labels ---------------------------------------------------------- 15.97s 2025-10-09 10:23:31.227700 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.84s 2025-10-09 10:23:31.227707 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.35s 2025-10-09 10:23:31.227713 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.69s 2025-10-09 10:23:31.227720 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.00s 2025-10-09 10:23:31.227726 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 4.53s 2025-10-09 10:23:31.227733 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.91s 2025-10-09 10:23:31.227739 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.80s 2025-10-09 10:23:31.227746 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.74s 2025-10-09 10:23:31.227753 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.39s 2025-10-09 10:23:31.227759 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.98s 2025-10-09 10:23:31.227766 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.67s 2025-10-09 10:23:31.227772 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.60s 2025-10-09 10:23:31.227779 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.59s 2025-10-09 10:23:31.227785 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 2.37s 2025-10-09 10:23:31.227792 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.36s 2025-10-09 10:23:31.227798 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.19s 2025-10-09 10:23:31.227805 | 
orchestrator | 2025-10-09 10:23:31 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:31.227812 | orchestrator | 2025-10-09 10:23:31 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:31.227818 | orchestrator | 2025-10-09 10:23:31 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:34.307550 | orchestrator | 2025-10-09 10:23:34 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:34.310094 | orchestrator | 2025-10-09 10:23:34 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:34.314960 | orchestrator | 2025-10-09 10:23:34 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:34.320055 | orchestrator | 2025-10-09 10:23:34 | INFO  | Task 5a2455b5-49c3-465b-b903-2407c615c718 is in state STARTED 2025-10-09 10:23:34.322814 | orchestrator | 2025-10-09 10:23:34 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:34.327226 | orchestrator | 2025-10-09 10:23:34 | INFO  | Task 11cca815-78ee-43a4-b060-51827c0a922d is in state STARTED 2025-10-09 10:23:34.327327 | orchestrator | 2025-10-09 10:23:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:37.383646 | orchestrator | 2025-10-09 10:23:37 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:37.384853 | orchestrator | 2025-10-09 10:23:37 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:37.385682 | orchestrator | 2025-10-09 10:23:37 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:37.388207 | orchestrator | 2025-10-09 10:23:37 | INFO  | Task 5a2455b5-49c3-465b-b903-2407c615c718 is in state STARTED 2025-10-09 10:23:37.392410 | orchestrator | 2025-10-09 10:23:37 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:37.394163 | orchestrator | 2025-10-09 10:23:37 | INFO  | Task 11cca815-78ee-43a4-b060-51827c0a922d is in state STARTED 2025-10-09 10:23:37.394193 | orchestrator | 2025-10-09 10:23:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:40.472449 | orchestrator | 2025-10-09 10:23:40 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:40.472542 | orchestrator | 2025-10-09 10:23:40 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:40.472555 | orchestrator | 2025-10-09 10:23:40 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:40.472566 | orchestrator | 2025-10-09 10:23:40 | INFO  | Task 5a2455b5-49c3-465b-b903-2407c615c718 is in state STARTED 2025-10-09 10:23:40.472575 | orchestrator | 2025-10-09 10:23:40 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:40.472585 | orchestrator | 2025-10-09 10:23:40 | INFO  | Task 11cca815-78ee-43a4-b060-51827c0a922d is in state STARTED 2025-10-09 10:23:40.472595 | orchestrator | 2025-10-09 10:23:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:43.640836 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:43.642962 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:43.643186 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 
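The slowest step in the k3s TASKS RECAP above is the join verification (54.21s), which waits until every node has registered with the first server. A minimal sketch of that kind of check in Ansible, assuming six expected nodes (testbed-node-0 through testbed-node-5) and the default k3s kubeconfig path; the task body and retry numbers are illustrative, not the role's actual implementation:

- name: Verify that all expected nodes joined and report Ready
  ansible.builtin.command:
    cmd: kubectl get nodes --no-headers --kubeconfig /etc/rancher/k3s/k3s.yaml
  register: nodes
  # Retry for up to ~5 minutes; 6 is the assumed node count of this testbed.
  until: nodes.stdout_lines | select('search', ' Ready') | list | length >= 6
  retries: 30
  delay: 10
  changed_when: false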
2025-10-09 10:23:43.644377 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task 5a2455b5-49c3-465b-b903-2407c615c718 is in state SUCCESS 2025-10-09 10:23:43.645234 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:43.646441 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task 11cca815-78ee-43a4-b060-51827c0a922d is in state STARTED 2025-10-09 10:23:43.646459 | orchestrator | 2025-10-09 10:23:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:46.712183 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:46.712746 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:46.714881 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:46.717300 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:46.718575 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task 11cca815-78ee-43a4-b060-51827c0a922d is in state SUCCESS 2025-10-09 10:23:46.719007 | orchestrator | 2025-10-09 10:23:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:49.771995 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:49.774013 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:49.775439 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:49.776792 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:49.777024 | orchestrator | 2025-10-09 10:23:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:52.848608 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:52.848708 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:52.848724 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:52.848737 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:52.848748 | orchestrator | 2025-10-09 10:23:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:55.868805 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:55.870330 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:23:55.871924 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:55.873373 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:55.873394 | orchestrator | 2025-10-09 10:23:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:58.918376 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:23:58.919504 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 
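The repeated "is in state STARTED" / "Wait 1 second(s) until the next check" lines are the manager polling its long-running deployment tasks once per second until each reaches a terminal state (SUCCESS shows up for two of them further down). Roughly the same wait pattern expressed as an Ansible retry loop; show-task-state is a hypothetical helper used only for illustration, not the real CLI, and the retry count is an assumption:

- name: Wait until a deployment task reports SUCCESS
  ansible.builtin.command:
    # Hypothetical status helper; the task ID is taken from this log.
    cmd: show-task-state 70e0e7f1-cd38-4cc5-951d-017e2c120fe5
  register: task_state
  # Poll once per second, mirroring the 1-second wait seen in the log.
  until: task_state.stdout | trim == 'SUCCESS'
  retries: 600
  delay: 1
  changed_when: false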
2025-10-09 10:23:58.920535 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:23:58.923883 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:23:58.923908 | orchestrator | 2025-10-09 10:23:58 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:01.956887 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:01.958640 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:01.959417 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:01.961467 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:01.961488 | orchestrator | 2025-10-09 10:24:01 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:04.996623 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:04.996848 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:04.997709 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:05.001155 | orchestrator | 2025-10-09 10:24:05 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:05.001179 | orchestrator | 2025-10-09 10:24:05 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:08.030351 | orchestrator | 2025-10-09 10:24:08 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:08.030752 | orchestrator | 2025-10-09 10:24:08 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:08.031698 | orchestrator | 2025-10-09 10:24:08 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:08.032363 | orchestrator | 2025-10-09 10:24:08 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:08.032603 | orchestrator | 2025-10-09 10:24:08 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:11.094459 | orchestrator | 2025-10-09 10:24:11 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:11.094618 | orchestrator | 2025-10-09 10:24:11 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:11.095265 | orchestrator | 2025-10-09 10:24:11 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:11.097578 | orchestrator | 2025-10-09 10:24:11 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:11.097600 | orchestrator | 2025-10-09 10:24:11 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:14.174238 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:14.175500 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:14.177522 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:14.178300 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 
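The recap above also lists the k3s_custom_registries tasks that create /etc/rancher/k3s and write registries.yaml so k3s pulls images through a custom registry. For orientation, a registries.yaml with a single mirror typically looks like the following; the endpoint and credentials are placeholders, not values from this deployment:

mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"
configs:
  "registry.example.com":
    auth:
      username: pull-user
      password: "not-a-real-password"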
2025-10-09 10:24:14.178335 | orchestrator | 2025-10-09 10:24:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:17.220595 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:17.222608 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:17.225333 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:17.227169 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:17.227860 | orchestrator | 2025-10-09 10:24:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:20.266359 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:20.266736 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:20.268762 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:20.271093 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:20.271120 | orchestrator | 2025-10-09 10:24:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:23.327341 | orchestrator | 2025-10-09 10:24:23 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:23.328100 | orchestrator | 2025-10-09 10:24:23 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:23.329312 | orchestrator | 2025-10-09 10:24:23 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:23.330500 | orchestrator | 2025-10-09 10:24:23 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:23.331142 | orchestrator | 2025-10-09 10:24:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:26.364623 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:26.365579 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:26.366816 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:26.367877 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:26.368164 | orchestrator | 2025-10-09 10:24:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:29.402709 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:29.403106 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:29.404066 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:29.404740 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:29.404867 | orchestrator | 2025-10-09 10:24:29 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:32.439733 | orchestrator | 2025-10-09 10:24:32 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:32.441134 
| orchestrator | 2025-10-09 10:24:32 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:32.442994 | orchestrator | 2025-10-09 10:24:32 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:32.444669 | orchestrator | 2025-10-09 10:24:32 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:32.444857 | orchestrator | 2025-10-09 10:24:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:35.475301 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:35.476728 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:35.480932 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:35.484451 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:35.485038 | orchestrator | 2025-10-09 10:24:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:38.524793 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:38.524891 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:38.526652 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:38.527851 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:38.527877 | orchestrator | 2025-10-09 10:24:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:41.571411 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:41.572477 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:41.573686 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:41.575071 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:41.575506 | orchestrator | 2025-10-09 10:24:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:44.621578 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:44.623360 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:44.626529 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:44.631112 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:44.631804 | orchestrator | 2025-10-09 10:24:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:47.676005 | orchestrator | 2025-10-09 10:24:47 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:47.676154 | orchestrator | 2025-10-09 10:24:47 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:47.676909 | orchestrator | 2025-10-09 10:24:47 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:47.677542 | 
orchestrator | 2025-10-09 10:24:47 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:47.677569 | orchestrator | 2025-10-09 10:24:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:50.716877 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:50.718662 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:50.719159 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:50.720186 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:50.720218 | orchestrator | 2025-10-09 10:24:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:53.770400 | orchestrator | 2025-10-09 10:24:53 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:53.771970 | orchestrator | 2025-10-09 10:24:53 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:53.776212 | orchestrator | 2025-10-09 10:24:53 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:53.778232 | orchestrator | 2025-10-09 10:24:53 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:53.778278 | orchestrator | 2025-10-09 10:24:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:56.823662 | orchestrator | 2025-10-09 10:24:56 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:56.823762 | orchestrator | 2025-10-09 10:24:56 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:56.823831 | orchestrator | 2025-10-09 10:24:56 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:56.828640 | orchestrator | 2025-10-09 10:24:56 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:56.828716 | orchestrator | 2025-10-09 10:24:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:59.859025 | orchestrator | 2025-10-09 10:24:59 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state STARTED 2025-10-09 10:24:59.861529 | orchestrator | 2025-10-09 10:24:59 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:24:59.862676 | orchestrator | 2025-10-09 10:24:59 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:24:59.863734 | orchestrator | 2025-10-09 10:24:59 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:24:59.863938 | orchestrator | 2025-10-09 10:24:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:02.899589 | orchestrator | 2025-10-09 10:25:02 | INFO  | Task aace2bdf-3440-4469-9063-947b98a68bf0 is in state SUCCESS 2025-10-09 10:25:02.900447 | orchestrator | 2025-10-09 10:25:02.900496 | orchestrator | 2025-10-09 10:25:02.900509 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-10-09 10:25:02.900521 | orchestrator | 2025-10-09 10:25:02.900551 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-10-09 10:25:02.900563 | orchestrator | Thursday 09 October 2025 10:23:37 +0000 (0:00:00.343) 0:00:00.343 ****** 2025-10-09 10:25:02.900575 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] 2025-10-09 10:25:02.900586 | orchestrator | 2025-10-09 10:25:02.900597 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-10-09 10:25:02.900608 | orchestrator | Thursday 09 October 2025 10:23:38 +0000 (0:00:01.027) 0:00:01.370 ****** 2025-10-09 10:25:02.900619 | orchestrator | changed: [testbed-manager] 2025-10-09 10:25:02.900631 | orchestrator | 2025-10-09 10:25:02.900641 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-10-09 10:25:02.900653 | orchestrator | Thursday 09 October 2025 10:23:39 +0000 (0:00:01.556) 0:00:02.927 ****** 2025-10-09 10:25:02.900664 | orchestrator | changed: [testbed-manager] 2025-10-09 10:25:02.900674 | orchestrator | 2025-10-09 10:25:02.900685 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:25:02.900697 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:25:02.900710 | orchestrator | 2025-10-09 10:25:02.900721 | orchestrator | 2025-10-09 10:25:02.900732 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:25:02.900743 | orchestrator | Thursday 09 October 2025 10:23:40 +0000 (0:00:00.744) 0:00:03.672 ****** 2025-10-09 10:25:02.900754 | orchestrator | =============================================================================== 2025-10-09 10:25:02.900765 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.56s 2025-10-09 10:25:02.900776 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.03s 2025-10-09 10:25:02.900787 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.74s 2025-10-09 10:25:02.900798 | orchestrator | 2025-10-09 10:25:02.900809 | orchestrator | 2025-10-09 10:25:02.900820 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-10-09 10:25:02.900831 | orchestrator | 2025-10-09 10:25:02.900842 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-10-09 10:25:02.900852 | orchestrator | Thursday 09 October 2025 10:23:36 +0000 (0:00:00.345) 0:00:00.345 ****** 2025-10-09 10:25:02.900863 | orchestrator | ok: [testbed-manager] 2025-10-09 10:25:02.900875 | orchestrator | 2025-10-09 10:25:02.900886 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-10-09 10:25:02.900897 | orchestrator | Thursday 09 October 2025 10:23:37 +0000 (0:00:00.695) 0:00:01.041 ****** 2025-10-09 10:25:02.900908 | orchestrator | ok: [testbed-manager] 2025-10-09 10:25:02.900942 | orchestrator | 2025-10-09 10:25:02.900953 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-10-09 10:25:02.900964 | orchestrator | Thursday 09 October 2025 10:23:38 +0000 (0:00:00.640) 0:00:01.682 ****** 2025-10-09 10:25:02.900975 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-10-09 10:25:02.900986 | orchestrator | 2025-10-09 10:25:02.900997 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-10-09 10:25:02.901008 | orchestrator | Thursday 09 October 2025 10:23:39 +0000 (0:00:00.965) 0:00:02.648 ****** 2025-10-09 10:25:02.901018 | orchestrator | changed: [testbed-manager] 2025-10-09 
10:25:02.901029 | orchestrator | 2025-10-09 10:25:02.901040 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-10-09 10:25:02.901051 | orchestrator | Thursday 09 October 2025 10:23:40 +0000 (0:00:01.540) 0:00:04.188 ****** 2025-10-09 10:25:02.901117 | orchestrator | changed: [testbed-manager] 2025-10-09 10:25:02.901169 | orchestrator | 2025-10-09 10:25:02.901181 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-10-09 10:25:02.901192 | orchestrator | Thursday 09 October 2025 10:23:41 +0000 (0:00:01.031) 0:00:05.220 ****** 2025-10-09 10:25:02.901203 | orchestrator | changed: [testbed-manager -> localhost] 2025-10-09 10:25:02.901214 | orchestrator | 2025-10-09 10:25:02.901226 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-10-09 10:25:02.901237 | orchestrator | Thursday 09 October 2025 10:23:43 +0000 (0:00:01.817) 0:00:07.037 ****** 2025-10-09 10:25:02.901315 | orchestrator | changed: [testbed-manager -> localhost] 2025-10-09 10:25:02.901326 | orchestrator | 2025-10-09 10:25:02.901336 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-10-09 10:25:02.901345 | orchestrator | Thursday 09 October 2025 10:23:44 +0000 (0:00:00.996) 0:00:08.034 ****** 2025-10-09 10:25:02.901355 | orchestrator | ok: [testbed-manager] 2025-10-09 10:25:02.901365 | orchestrator | 2025-10-09 10:25:02.901375 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-10-09 10:25:02.901384 | orchestrator | Thursday 09 October 2025 10:23:45 +0000 (0:00:00.435) 0:00:08.469 ****** 2025-10-09 10:25:02.901394 | orchestrator | ok: [testbed-manager] 2025-10-09 10:25:02.901404 | orchestrator | 2025-10-09 10:25:02.901413 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:25:02.901423 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:25:02.901433 | orchestrator | 2025-10-09 10:25:02.901443 | orchestrator | 2025-10-09 10:25:02.901452 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:25:02.901462 | orchestrator | Thursday 09 October 2025 10:23:45 +0000 (0:00:00.373) 0:00:08.842 ****** 2025-10-09 10:25:02.901472 | orchestrator | =============================================================================== 2025-10-09 10:25:02.901482 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.82s 2025-10-09 10:25:02.901491 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.54s 2025-10-09 10:25:02.901501 | orchestrator | Change server address in the kubeconfig --------------------------------- 1.03s 2025-10-09 10:25:02.901525 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.00s 2025-10-09 10:25:02.901536 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.97s 2025-10-09 10:25:02.901553 | orchestrator | Get home directory of operator user ------------------------------------- 0.70s 2025-10-09 10:25:02.901563 | orchestrator | Create .kube directory -------------------------------------------------- 0.64s 2025-10-09 10:25:02.901573 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2025-10-09 
10:25:02.901582 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.37s 2025-10-09 10:25:02.901592 | orchestrator | 2025-10-09 10:25:02.901601 | orchestrator | 2025-10-09 10:25:02.901611 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-10-09 10:25:02.901631 | orchestrator | 2025-10-09 10:25:02.901641 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-10-09 10:25:02.901651 | orchestrator | Thursday 09 October 2025 10:22:25 +0000 (0:00:00.482) 0:00:00.482 ****** 2025-10-09 10:25:02.901660 | orchestrator | ok: [localhost] => { 2025-10-09 10:25:02.901671 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-10-09 10:25:02.901681 | orchestrator | } 2025-10-09 10:25:02.901691 | orchestrator | 2025-10-09 10:25:02.901701 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-10-09 10:25:02.901711 | orchestrator | Thursday 09 October 2025 10:22:25 +0000 (0:00:00.167) 0:00:00.649 ****** 2025-10-09 10:25:02.901722 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-10-09 10:25:02.901733 | orchestrator | ...ignoring 2025-10-09 10:25:02.901742 | orchestrator | 2025-10-09 10:25:02.901752 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-10-09 10:25:02.901762 | orchestrator | Thursday 09 October 2025 10:22:30 +0000 (0:00:05.228) 0:00:05.878 ****** 2025-10-09 10:25:02.901771 | orchestrator | skipping: [localhost] 2025-10-09 10:25:02.901781 | orchestrator | 2025-10-09 10:25:02.901791 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-10-09 10:25:02.901800 | orchestrator | Thursday 09 October 2025 10:22:30 +0000 (0:00:00.072) 0:00:05.951 ****** 2025-10-09 10:25:02.901810 | orchestrator | ok: [localhost] 2025-10-09 10:25:02.901819 | orchestrator | 2025-10-09 10:25:02.901829 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:25:02.901838 | orchestrator | 2025-10-09 10:25:02.901848 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:25:02.901858 | orchestrator | Thursday 09 October 2025 10:22:31 +0000 (0:00:00.196) 0:00:06.147 ****** 2025-10-09 10:25:02.901867 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:02.901877 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:02.901887 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:02.901896 | orchestrator | 2025-10-09 10:25:02.901906 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:25:02.901916 | orchestrator | Thursday 09 October 2025 10:22:31 +0000 (0:00:00.596) 0:00:06.744 ****** 2025-10-09 10:25:02.901925 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-10-09 10:25:02.901935 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-10-09 10:25:02.901945 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-10-09 10:25:02.901954 | orchestrator | 2025-10-09 10:25:02.901964 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-10-09 10:25:02.901974 | 
orchestrator | 2025-10-09 10:25:02.901983 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-10-09 10:25:02.901993 | orchestrator | Thursday 09 October 2025 10:22:32 +0000 (0:00:00.925) 0:00:07.669 ****** 2025-10-09 10:25:02.902003 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:25:02.902013 | orchestrator | 2025-10-09 10:25:02.902078 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-10-09 10:25:02.902088 | orchestrator | Thursday 09 October 2025 10:22:33 +0000 (0:00:00.670) 0:00:08.340 ****** 2025-10-09 10:25:02.902098 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:02.902108 | orchestrator | 2025-10-09 10:25:02.902117 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-10-09 10:25:02.902127 | orchestrator | Thursday 09 October 2025 10:22:34 +0000 (0:00:01.149) 0:00:09.489 ****** 2025-10-09 10:25:02.902137 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:02.902146 | orchestrator | 2025-10-09 10:25:02.902156 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-10-09 10:25:02.902166 | orchestrator | Thursday 09 October 2025 10:22:35 +0000 (0:00:00.767) 0:00:10.257 ****** 2025-10-09 10:25:02.902183 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:02.902192 | orchestrator | 2025-10-09 10:25:02.902202 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-10-09 10:25:02.902212 | orchestrator | Thursday 09 October 2025 10:22:36 +0000 (0:00:01.367) 0:00:11.625 ****** 2025-10-09 10:25:02.902222 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:02.902231 | orchestrator | 2025-10-09 10:25:02.902260 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-10-09 10:25:02.902270 | orchestrator | Thursday 09 October 2025 10:22:37 +0000 (0:00:00.545) 0:00:12.171 ****** 2025-10-09 10:25:02.902280 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:02.902290 | orchestrator | 2025-10-09 10:25:02.902300 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-10-09 10:25:02.902309 | orchestrator | Thursday 09 October 2025 10:22:37 +0000 (0:00:00.916) 0:00:13.089 ****** 2025-10-09 10:25:02.902319 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:25:02.902329 | orchestrator | 2025-10-09 10:25:02.902339 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-10-09 10:25:02.902357 | orchestrator | Thursday 09 October 2025 10:22:41 +0000 (0:00:03.796) 0:00:16.886 ****** 2025-10-09 10:25:02.902367 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:02.902377 | orchestrator | 2025-10-09 10:25:02.902386 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-10-09 10:25:02.902401 | orchestrator | Thursday 09 October 2025 10:22:43 +0000 (0:00:01.698) 0:00:18.584 ****** 2025-10-09 10:25:02.902411 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:02.902420 | orchestrator | 2025-10-09 10:25:02.902430 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-10-09 10:25:02.902440 | orchestrator | 
Thursday 09 October 2025 10:22:44 +0000 (0:00:00.605) 0:00:19.189 ****** 2025-10-09 10:25:02.902449 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:02.902459 | orchestrator | 2025-10-09 10:25:02.902469 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-10-09 10:25:02.902478 | orchestrator | Thursday 09 October 2025 10:22:44 +0000 (0:00:00.459) 0:00:19.649 ****** 2025-10-09 10:25:02.902492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.902509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.902531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.902542 | orchestrator | 2025-10-09 10:25:02.902552 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-10-09 10:25:02.902562 | orchestrator | Thursday 09 October 2025 10:22:45 +0000 (0:00:01.168) 0:00:20.817 ****** 2025-10-09 10:25:02.902584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.902596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.902608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.902624 | orchestrator | 2025-10-09 10:25:02.902635 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-10-09 10:25:02.902644 | orchestrator | Thursday 09 October 2025 10:22:48 +0000 (0:00:02.821) 0:00:23.639 ****** 2025-10-09 10:25:02.902654 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-09 10:25:02.902664 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-09 10:25:02.902674 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-09 10:25:02.902683 | orchestrator | 2025-10-09 10:25:02.902693 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-10-09 10:25:02.902703 | orchestrator | Thursday 09 October 2025 10:22:50 +0000 (0:00:01.791) 0:00:25.430 ****** 2025-10-09 10:25:02.902712 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-09 10:25:02.902722 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-09 10:25:02.902731 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-09 10:25:02.902741 | orchestrator | 2025-10-09 10:25:02.902755 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-10-09 10:25:02.902765 | orchestrator | Thursday 09 October 2025 10:22:53 +0000 (0:00:03.682) 0:00:29.113 ****** 2025-10-09 10:25:02.902779 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-09 10:25:02.902789 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-09 10:25:02.902798 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-09 10:25:02.902808 | orchestrator | 2025-10-09 10:25:02.902818 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-10-09 10:25:02.902827 | orchestrator | Thursday 09 October 2025 10:22:56 +0000 (0:00:02.281) 0:00:31.394 ****** 2025-10-09 10:25:02.902837 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-09 10:25:02.902846 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-09 10:25:02.902856 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-09 10:25:02.902865 | orchestrator | 2025-10-09 10:25:02.902875 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-10-09 10:25:02.902884 | orchestrator | Thursday 09 October 2025 10:22:59 +0000 (0:00:02.979) 0:00:34.373 ****** 2025-10-09 10:25:02.902894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-09 10:25:02.902903 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-09 10:25:02.902913 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-09 10:25:02.902929 | orchestrator | 2025-10-09 10:25:02.902939 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-10-09 10:25:02.902948 | orchestrator | Thursday 09 October 2025 10:23:02 +0000 (0:00:02.905) 0:00:37.280 ****** 2025-10-09 10:25:02.902958 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-09 10:25:02.902967 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-09 10:25:02.902977 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-09 10:25:02.902987 | orchestrator | 2025-10-09 10:25:02.902996 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-10-09 10:25:02.903006 | orchestrator | Thursday 09 October 2025 10:23:04 +0000 (0:00:02.022) 0:00:39.302 ****** 2025-10-09 10:25:02.903015 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:02.903025 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:02.903034 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:02.903044 | orchestrator | 2025-10-09 10:25:02.903054 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-10-09 10:25:02.903063 | orchestrator | Thursday 09 October 2025 10:23:05 +0000 (0:00:01.297) 0:00:40.600 ****** 2025-10-09 10:25:02.903074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.903097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.903109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:25:02.903135 | orchestrator | 2025-10-09 10:25:02.903145 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-10-09 10:25:02.903155 | orchestrator | Thursday 09 October 2025 10:23:09 +0000 (0:00:03.589) 0:00:44.194 ****** 2025-10-09 10:25:02.903164 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:02.903174 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:02.903183 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:02.903193 | orchestrator | 2025-10-09 10:25:02.903202 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-10-09 10:25:02.903212 | orchestrator | Thursday 09 October 2025 10:23:10 +0000 (0:00:01.768) 0:00:45.963 ****** 2025-10-09 10:25:02.903221 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:02.903231 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:02.903257 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:02.903268 | orchestrator | 2025-10-09 10:25:02.903278 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-10-09 10:25:02.903287 | orchestrator | Thursday 09 October 2025 10:23:18 +0000 (0:00:07.709) 0:00:53.672 ****** 2025-10-09 10:25:02.903297 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:02.903307 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:02.903316 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:02.903326 | orchestrator | 2025-10-09 10:25:02.903335 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-09 10:25:02.903345 | orchestrator | 2025-10-09 10:25:02.903354 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-09 10:25:02.903364 | orchestrator | Thursday 09 October 2025 10:23:19 +0000 (0:00:00.560) 0:00:54.233 ****** 2025-10-09 10:25:02.903374 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:02.903383 | orchestrator | 2025-10-09 10:25:02.903393 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-09 10:25:02.903403 | 
orchestrator | Thursday 09 October 2025 10:23:20 +0000 (0:00:01.168) 0:00:55.402 ****** 2025-10-09 10:25:02.903412 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:02.903422 | orchestrator | 2025-10-09 10:25:02.903432 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-09 10:25:02.903441 | orchestrator | Thursday 09 October 2025 10:23:20 +0000 (0:00:00.581) 0:00:55.983 ****** 2025-10-09 10:25:02.903451 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:02.903460 | orchestrator | 2025-10-09 10:25:02.903470 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-09 10:25:02.903480 | orchestrator | Thursday 09 October 2025 10:23:22 +0000 (0:00:01.888) 0:00:57.871 ****** 2025-10-09 10:25:02.903489 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:02.903499 | orchestrator | 2025-10-09 10:25:02.903509 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-09 10:25:02.903518 | orchestrator | 2025-10-09 10:25:02.903528 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-09 10:25:02.903538 | orchestrator | Thursday 09 October 2025 10:24:20 +0000 (0:00:58.194) 0:01:56.066 ****** 2025-10-09 10:25:02.903547 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:02.903557 | orchestrator | 2025-10-09 10:25:02.903567 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-09 10:25:02.903576 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:00.739) 0:01:56.806 ****** 2025-10-09 10:25:02.903592 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:02.903602 | orchestrator | 2025-10-09 10:25:02.903611 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-09 10:25:02.903621 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:00.298) 0:01:57.104 ****** 2025-10-09 10:25:02.903631 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:02.903640 | orchestrator | 2025-10-09 10:25:02.903650 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-09 10:25:02.903660 | orchestrator | Thursday 09 October 2025 10:24:28 +0000 (0:00:06.685) 0:02:03.790 ****** 2025-10-09 10:25:02.903669 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:02.903679 | orchestrator | 2025-10-09 10:25:02.903689 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-09 10:25:02.903698 | orchestrator | 2025-10-09 10:25:02.903708 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-09 10:25:02.903723 | orchestrator | Thursday 09 October 2025 10:24:39 +0000 (0:00:11.125) 0:02:14.915 ****** 2025-10-09 10:25:02.903777 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:02.903835 | orchestrator | 2025-10-09 10:25:02.903851 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-09 10:25:02.903861 | orchestrator | Thursday 09 October 2025 10:24:40 +0000 (0:00:00.608) 0:02:15.523 ****** 2025-10-09 10:25:02.903871 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:02.903881 | orchestrator | 2025-10-09 10:25:02.903890 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-09 10:25:02.903900 | orchestrator | 
Thursday 09 October 2025 10:24:40 +0000 (0:00:00.245) 0:02:15.769 ****** 2025-10-09 10:25:02.903909 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:02.903919 | orchestrator | 2025-10-09 10:25:02.903929 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-09 10:25:02.903939 | orchestrator | Thursday 09 October 2025 10:24:42 +0000 (0:00:01.711) 0:02:17.480 ****** 2025-10-09 10:25:02.903948 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:02.903958 | orchestrator | 2025-10-09 10:25:02.903968 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-10-09 10:25:02.903977 | orchestrator | 2025-10-09 10:25:02.903987 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-10-09 10:25:02.903997 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:15.719) 0:02:33.200 ****** 2025-10-09 10:25:02.904006 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:25:02.904016 | orchestrator | 2025-10-09 10:25:02.904025 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-10-09 10:25:02.904035 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:00.705) 0:02:33.906 ****** 2025-10-09 10:25:02.904045 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-10-09 10:25:02.904055 | orchestrator | enable_outward_rabbitmq_True 2025-10-09 10:25:02.904064 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-10-09 10:25:02.904074 | orchestrator | outward_rabbitmq_restart 2025-10-09 10:25:02.904084 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:02.904093 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:02.904103 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:02.904113 | orchestrator | 2025-10-09 10:25:02.904123 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-10-09 10:25:02.904132 | orchestrator | skipping: no hosts matched 2025-10-09 10:25:02.904142 | orchestrator | 2025-10-09 10:25:02.904152 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-10-09 10:25:02.904162 | orchestrator | skipping: no hosts matched 2025-10-09 10:25:02.904171 | orchestrator | 2025-10-09 10:25:02.904181 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-10-09 10:25:02.904191 | orchestrator | skipping: no hosts matched 2025-10-09 10:25:02.904200 | orchestrator | 2025-10-09 10:25:02.904210 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:25:02.904227 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-10-09 10:25:02.904260 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-09 10:25:02.904271 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:25:02.904281 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:25:02.904291 | orchestrator | 2025-10-09 10:25:02.904300 | orchestrator | 2025-10-09 10:25:02.904310 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 
10:25:02.904320 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:02.657) 0:02:36.563 ****** 2025-10-09 10:25:02.904329 | orchestrator | =============================================================================== 2025-10-09 10:25:02.904339 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 85.04s 2025-10-09 10:25:02.904349 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.29s 2025-10-09 10:25:02.904358 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.71s 2025-10-09 10:25:02.904368 | orchestrator | Check RabbitMQ service -------------------------------------------------- 5.23s 2025-10-09 10:25:02.904378 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.80s 2025-10-09 10:25:02.904387 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.68s 2025-10-09 10:25:02.904397 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 3.59s 2025-10-09 10:25:02.904407 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.98s 2025-10-09 10:25:02.904416 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.91s 2025-10-09 10:25:02.904426 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.82s 2025-10-09 10:25:02.904436 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.66s 2025-10-09 10:25:02.904445 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.52s 2025-10-09 10:25:02.904455 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.28s 2025-10-09 10:25:02.904464 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.02s 2025-10-09 10:25:02.904474 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.79s 2025-10-09 10:25:02.904490 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.77s 2025-10-09 10:25:02.904500 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.70s 2025-10-09 10:25:02.904510 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.37s 2025-10-09 10:25:02.904520 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.30s 2025-10-09 10:25:02.904530 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.17s 2025-10-09 10:25:02.904539 | orchestrator | 2025-10-09 10:25:02 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:02.904639 | orchestrator | 2025-10-09 10:25:02 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:02.906498 | orchestrator | 2025-10-09 10:25:02 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:02.906519 | orchestrator | 2025-10-09 10:25:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:06.051793 | orchestrator | 2025-10-09 10:25:06 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:06.053992 | orchestrator | 2025-10-09 10:25:06 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:06.055894 | orchestrator | 
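Note: the "Enable all stable feature flags" step in the recap above turns on every stable RabbitMQ feature flag on the freshly formed cluster so that later upgrades are not blocked by disabled flags. A minimal sketch of the underlying operation (assuming the kolla container name rabbitmq; the role's actual invocation may differ):

    docker exec rabbitmq rabbitmqctl enable_feature_flag all

The two "Could not match supplied host pattern" warnings are expected here: the outward RabbitMQ groups are empty in this deployment, as the later "rabbitmq (outward)" plays also show with "skipping: no hosts matched".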
2025-10-09 10:25:06 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:06.055960 | orchestrator | 2025-10-09 10:25:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:09.092067 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:09.092137 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:09.092862 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:09.092883 | orchestrator | 2025-10-09 10:25:09 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:12.128766 | orchestrator | 2025-10-09 10:25:12 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:12.129498 | orchestrator | 2025-10-09 10:25:12 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:12.132161 | orchestrator | 2025-10-09 10:25:12 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:12.132294 | orchestrator | 2025-10-09 10:25:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:15.167613 | orchestrator | 2025-10-09 10:25:15 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:15.168851 | orchestrator | 2025-10-09 10:25:15 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:15.171378 | orchestrator | 2025-10-09 10:25:15 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:15.172916 | orchestrator | 2025-10-09 10:25:15 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:18.215196 | orchestrator | 2025-10-09 10:25:18 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:18.225697 | orchestrator | 2025-10-09 10:25:18 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:18.225735 | orchestrator | 2025-10-09 10:25:18 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:18.225748 | orchestrator | 2025-10-09 10:25:18 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:21.263727 | orchestrator | 2025-10-09 10:25:21 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:21.265965 | orchestrator | 2025-10-09 10:25:21 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:21.266831 | orchestrator | 2025-10-09 10:25:21 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:21.266956 | orchestrator | 2025-10-09 10:25:21 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:24.303529 | orchestrator | 2025-10-09 10:25:24 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:24.304383 | orchestrator | 2025-10-09 10:25:24 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:24.305637 | orchestrator | 2025-10-09 10:25:24 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:24.305662 | orchestrator | 2025-10-09 10:25:24 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:27.332067 | orchestrator | 2025-10-09 10:25:27 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:27.332309 | orchestrator | 2025-10-09 10:25:27 | INFO  | Task 
5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:27.334385 | orchestrator | 2025-10-09 10:25:27 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:27.334432 | orchestrator | 2025-10-09 10:25:27 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:30.383325 | orchestrator | 2025-10-09 10:25:30 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:30.386796 | orchestrator | 2025-10-09 10:25:30 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:30.388083 | orchestrator | 2025-10-09 10:25:30 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:30.388108 | orchestrator | 2025-10-09 10:25:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:33.435446 | orchestrator | 2025-10-09 10:25:33 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:33.436782 | orchestrator | 2025-10-09 10:25:33 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:33.438170 | orchestrator | 2025-10-09 10:25:33 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:33.439374 | orchestrator | 2025-10-09 10:25:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:36.468467 | orchestrator | 2025-10-09 10:25:36 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:36.470794 | orchestrator | 2025-10-09 10:25:36 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:36.473215 | orchestrator | 2025-10-09 10:25:36 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state STARTED 2025-10-09 10:25:36.473662 | orchestrator | 2025-10-09 10:25:36 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:39.506893 | orchestrator | 2025-10-09 10:25:39 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:39.507837 | orchestrator | 2025-10-09 10:25:39 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:39.510901 | orchestrator | 2025-10-09 10:25:39 | INFO  | Task 55d89daf-0c25-44ae-bd53-c32690b8209d is in state SUCCESS 2025-10-09 10:25:39.510932 | orchestrator | 2025-10-09 10:25:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:39.512733 | orchestrator | 2025-10-09 10:25:39.512773 | orchestrator | 2025-10-09 10:25:39.512786 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:25:39.512798 | orchestrator | 2025-10-09 10:25:39.512809 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:25:39.512821 | orchestrator | Thursday 09 October 2025 10:23:28 +0000 (0:00:00.270) 0:00:00.270 ****** 2025-10-09 10:25:39.512833 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:25:39.512847 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:25:39.513170 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:25:39.513190 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.513201 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.513212 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.513223 | orchestrator | 2025-10-09 10:25:39.513234 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:25:39.513286 | orchestrator | Thursday 09 October 2025 10:23:29 +0000 (0:00:00.892) 0:00:01.162 
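Note: the repeating "is in state STARTED" lines are the deployment driver polling its three background tasks once per second until each reports SUCCESS; a task's buffered Ansible output is only relayed once that task finishes, which is why the inner play timestamps (10:23:xx) lag behind the outer log timestamps (10:25:xx). Purely as an illustration of the wait loop (task_state is a hypothetical placeholder, not a real command of the tooling):

    # illustrative only; 'task_state' stands in for however the tooling queries a task's state
    while [ "$(task_state "$TASK_ID")" = "STARTED" ]; do
        sleep 1   # "Wait 1 second(s) until the next check"
    done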
****** 2025-10-09 10:25:39.513298 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-10-09 10:25:39.513310 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-10-09 10:25:39.513321 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-10-09 10:25:39.513334 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-10-09 10:25:39.513346 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-10-09 10:25:39.513358 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-10-09 10:25:39.513396 | orchestrator | 2025-10-09 10:25:39.513409 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-10-09 10:25:39.513421 | orchestrator | 2025-10-09 10:25:39.513434 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-10-09 10:25:39.513446 | orchestrator | Thursday 09 October 2025 10:23:30 +0000 (0:00:01.394) 0:00:02.557 ****** 2025-10-09 10:25:39.513459 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:25:39.513473 | orchestrator | 2025-10-09 10:25:39.513485 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-10-09 10:25:39.513497 | orchestrator | Thursday 09 October 2025 10:23:33 +0000 (0:00:02.858) 0:00:05.415 ****** 2025-10-09 10:25:39.513512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513543 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513622 | orchestrator | 2025-10-09 10:25:39.513635 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-10-09 10:25:39.513647 | orchestrator | Thursday 09 October 2025 10:23:36 +0000 (0:00:03.196) 0:00:08.611 ****** 2025-10-09 10:25:39.513659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513732 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513743 | orchestrator | 2025-10-09 10:25:39.513754 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-10-09 10:25:39.513765 | orchestrator | Thursday 09 October 2025 10:23:39 +0000 (0:00:03.151) 0:00:11.763 ****** 2025-10-09 10:25:39.513776 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
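Note: each item echoed above is the same container definition for ovn_controller: the kolla image, the config_files bind mount, the shared /run/openvswitch socket directory and the kolla_logs volume. Read as a plain container invocation it is roughly the following (a hedged approximation only; host networking is assumed here, and the real container also gets kolla's environment and config-copy entrypoint):

    docker run -d --name ovn_controller --network host \
        -v /etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro \
        -v /run/openvswitch:/run/openvswitch:shared \
        -v /etc/localtime:/etc/localtime:ro \
        -v kolla_logs:/var/log/kolla/ \
        registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928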
2025-10-09 10:25:39.513858 | orchestrator | 2025-10-09 10:25:39.513869 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-10-09 10:25:39.513880 | orchestrator | Thursday 09 October 2025 10:23:41 +0000 (0:00:01.965) 0:00:13.729 ****** 2025-10-09 10:25:39.513896 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.513978 | orchestrator | 2025-10-09 10:25:39.513989 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-10-09 10:25:39.514000 | orchestrator | Thursday 09 October 2025 10:23:43 +0000 (0:00:02.070) 0:00:15.800 ****** 2025-10-09 10:25:39.514011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.514072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.514083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.514100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.514111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.514123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.514133 | orchestrator | 2025-10-09 10:25:39.514144 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-10-09 10:25:39.514156 | orchestrator | Thursday 09 October 2025 10:23:45 +0000 (0:00:01.915) 0:00:17.715 ****** 2025-10-09 10:25:39.514167 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:25:39.514187 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:25:39.514198 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:25:39.514209 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:39.514220 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:39.514231 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:39.514270 | orchestrator | 2025-10-09 10:25:39.514281 | orchestrator | TASK 
[ovn-controller : Configure OVN in OVSDB] ********************************* 2025-10-09 10:25:39.514293 | orchestrator | Thursday 09 October 2025 10:23:48 +0000 (0:00:02.606) 0:00:20.322 ****** 2025-10-09 10:25:39.514304 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-10-09 10:25:39.514316 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-10-09 10:25:39.514335 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-10-09 10:25:39.514363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-10-09 10:25:39.514382 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-10-09 10:25:39.514400 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-10-09 10:25:39.514418 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:25:39.514432 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:25:39.514443 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:25:39.514453 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:25:39.514464 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:25:39.514475 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:25:39.514486 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:25:39.514498 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:25:39.514509 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:25:39.514520 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:25:39.514531 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:25:39.514542 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:25:39.514553 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:25:39.514565 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:25:39.514578 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:25:39.514606 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:25:39.514624 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:25:39.514643 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:25:39.514654 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:25:39.514672 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:25:39.514683 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:25:39.514694 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:25:39.514704 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:25:39.514715 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:25:39.514726 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:25:39.514736 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:25:39.514747 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:25:39.514758 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:25:39.514768 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:25:39.514779 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:25:39.514790 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-09 10:25:39.514801 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-09 10:25:39.514814 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-09 10:25:39.514834 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-09 10:25:39.514859 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-09 10:25:39.514877 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-10-09 10:25:39.514889 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-09 10:25:39.514900 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-10-09 10:25:39.514911 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-10-09 10:25:39.514922 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-10-09 10:25:39.514933 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-10-09 10:25:39.514943 | orchestrator | ok: [testbed-node-3] => (item={'name': 
'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-09 10:25:39.514954 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-09 10:25:39.514965 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-10-09 10:25:39.514976 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-09 10:25:39.514986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-09 10:25:39.514997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-09 10:25:39.515008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-09 10:25:39.515026 | orchestrator | 2025-10-09 10:25:39.515037 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:25:39.515047 | orchestrator | Thursday 09 October 2025 10:24:09 +0000 (0:00:21.709) 0:00:42.031 ****** 2025-10-09 10:25:39.515062 | orchestrator | 2025-10-09 10:25:39.515082 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:25:39.515100 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:00.393) 0:00:42.424 ****** 2025-10-09 10:25:39.515116 | orchestrator | 2025-10-09 10:25:39.515133 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:25:39.515144 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:00.161) 0:00:42.585 ****** 2025-10-09 10:25:39.515155 | orchestrator | 2025-10-09 10:25:39.515166 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:25:39.515176 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:00.095) 0:00:42.680 ****** 2025-10-09 10:25:39.515187 | orchestrator | 2025-10-09 10:25:39.515198 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:25:39.515209 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:00.077) 0:00:42.758 ****** 2025-10-09 10:25:39.515219 | orchestrator | 2025-10-09 10:25:39.515230 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:25:39.515265 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:00.073) 0:00:42.832 ****** 2025-10-09 10:25:39.515276 | orchestrator | 2025-10-09 10:25:39.515287 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-10-09 10:25:39.515298 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:00.085) 0:00:42.918 ****** 2025-10-09 10:25:39.515309 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:25:39.515320 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:25:39.515331 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.515341 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.515352 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.515363 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:25:39.515373 | orchestrator | 2025-10-09 10:25:39.515384 | orchestrator | RUNNING HANDLER 
[ovn-controller : Restart ovn-controller container] ************ 2025-10-09 10:25:39.515395 | orchestrator | Thursday 09 October 2025 10:24:12 +0000 (0:00:01.938) 0:00:44.856 ****** 2025-10-09 10:25:39.515406 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:39.515417 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:25:39.515428 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:25:39.515439 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:25:39.515450 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:39.515460 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:39.515471 | orchestrator | 2025-10-09 10:25:39.515482 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-10-09 10:25:39.515492 | orchestrator | 2025-10-09 10:25:39.515503 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-10-09 10:25:39.515514 | orchestrator | Thursday 09 October 2025 10:24:16 +0000 (0:00:03.856) 0:00:48.713 ****** 2025-10-09 10:25:39.515524 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:25:39.515535 | orchestrator | 2025-10-09 10:25:39.515546 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-10-09 10:25:39.515557 | orchestrator | Thursday 09 October 2025 10:24:17 +0000 (0:00:00.948) 0:00:49.661 ****** 2025-10-09 10:25:39.515567 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:25:39.515578 | orchestrator | 2025-10-09 10:25:39.515596 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-10-09 10:25:39.515607 | orchestrator | Thursday 09 October 2025 10:24:18 +0000 (0:00:00.578) 0:00:50.240 ****** 2025-10-09 10:25:39.515618 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.515636 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.515647 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.515658 | orchestrator | 2025-10-09 10:25:39.515669 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-10-09 10:25:39.515679 | orchestrator | Thursday 09 October 2025 10:24:19 +0000 (0:00:01.125) 0:00:51.365 ****** 2025-10-09 10:25:39.515690 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.515701 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.515711 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.515722 | orchestrator | 2025-10-09 10:25:39.515733 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-10-09 10:25:39.515744 | orchestrator | Thursday 09 October 2025 10:24:19 +0000 (0:00:00.376) 0:00:51.742 ****** 2025-10-09 10:25:39.515754 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.515765 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.515775 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.515786 | orchestrator | 2025-10-09 10:25:39.515797 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-10-09 10:25:39.515808 | orchestrator | Thursday 09 October 2025 10:24:20 +0000 (0:00:00.373) 0:00:52.116 ****** 2025-10-09 10:25:39.515818 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.515829 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.515839 | orchestrator | ok: 
[testbed-node-2] 2025-10-09 10:25:39.515850 | orchestrator | 2025-10-09 10:25:39.515861 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-10-09 10:25:39.515872 | orchestrator | Thursday 09 October 2025 10:24:20 +0000 (0:00:00.383) 0:00:52.499 ****** 2025-10-09 10:25:39.515882 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.515893 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.515904 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.515914 | orchestrator | 2025-10-09 10:25:39.515925 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-10-09 10:25:39.515936 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:00.702) 0:00:53.201 ****** 2025-10-09 10:25:39.515947 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.515957 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.515968 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.515979 | orchestrator | 2025-10-09 10:25:39.515989 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-10-09 10:25:39.516000 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:00.433) 0:00:53.635 ****** 2025-10-09 10:25:39.516011 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516022 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516032 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516043 | orchestrator | 2025-10-09 10:25:39.516053 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-10-09 10:25:39.516064 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:00.359) 0:00:53.995 ****** 2025-10-09 10:25:39.516075 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516096 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516108 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516118 | orchestrator | 2025-10-09 10:25:39.516129 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-10-09 10:25:39.516140 | orchestrator | Thursday 09 October 2025 10:24:22 +0000 (0:00:00.314) 0:00:54.310 ****** 2025-10-09 10:25:39.516150 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516161 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516172 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516182 | orchestrator | 2025-10-09 10:25:39.516193 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-10-09 10:25:39.516204 | orchestrator | Thursday 09 October 2025 10:24:22 +0000 (0:00:00.712) 0:00:55.022 ****** 2025-10-09 10:25:39.516215 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516226 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516287 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516300 | orchestrator | 2025-10-09 10:25:39.516312 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-10-09 10:25:39.516324 | orchestrator | Thursday 09 October 2025 10:24:23 +0000 (0:00:00.559) 0:00:55.582 ****** 2025-10-09 10:25:39.516335 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516346 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516357 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516368 | orchestrator | 2025-10-09 
10:25:39.516379 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-10-09 10:25:39.516391 | orchestrator | Thursday 09 October 2025 10:24:24 +0000 (0:00:00.650) 0:00:56.233 ****** 2025-10-09 10:25:39.516402 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516413 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516424 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516435 | orchestrator | 2025-10-09 10:25:39.516446 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-10-09 10:25:39.516458 | orchestrator | Thursday 09 October 2025 10:24:24 +0000 (0:00:00.355) 0:00:56.589 ****** 2025-10-09 10:25:39.516469 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516480 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516491 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516502 | orchestrator | 2025-10-09 10:25:39.516513 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-10-09 10:25:39.516525 | orchestrator | Thursday 09 October 2025 10:24:24 +0000 (0:00:00.351) 0:00:56.940 ****** 2025-10-09 10:25:39.516536 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516547 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516558 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516569 | orchestrator | 2025-10-09 10:25:39.516580 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-10-09 10:25:39.516592 | orchestrator | Thursday 09 October 2025 10:24:25 +0000 (0:00:00.901) 0:00:57.842 ****** 2025-10-09 10:25:39.516603 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516614 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516625 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516636 | orchestrator | 2025-10-09 10:25:39.516654 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-10-09 10:25:39.516666 | orchestrator | Thursday 09 October 2025 10:24:26 +0000 (0:00:00.397) 0:00:58.240 ****** 2025-10-09 10:25:39.516677 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516688 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516699 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516710 | orchestrator | 2025-10-09 10:25:39.516722 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-10-09 10:25:39.516732 | orchestrator | Thursday 09 October 2025 10:24:26 +0000 (0:00:00.355) 0:00:58.595 ****** 2025-10-09 10:25:39.516743 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.516754 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.516766 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.516776 | orchestrator | 2025-10-09 10:25:39.516787 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-10-09 10:25:39.516798 | orchestrator | Thursday 09 October 2025 10:24:26 +0000 (0:00:00.311) 0:00:58.907 ****** 2025-10-09 10:25:39.516809 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:25:39.516820 | orchestrator | 2025-10-09 10:25:39.516832 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 
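Note: the lookup tasks above decide whether an OVN RAFT cluster already exists (existing DB volumes, reachable DB ports, an elected leader) before bootstrap-initial.yml is included; on this fresh testbed every probe is skipped or empty, so the role falls through to bootstrapping a new three-node cluster on the controllers. Once the DB containers are up, the cluster state these checks look for can be inspected manually along these lines (hedged sketch; control-socket paths vary between images):

    docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound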
2025-10-09 10:25:39.516843 | orchestrator | Thursday 09 October 2025 10:24:27 +0000 (0:00:00.901) 0:00:59.808 ****** 2025-10-09 10:25:39.516854 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.516865 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.516876 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.516887 | orchestrator | 2025-10-09 10:25:39.516898 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-10-09 10:25:39.516917 | orchestrator | Thursday 09 October 2025 10:24:28 +0000 (0:00:00.753) 0:01:00.561 ****** 2025-10-09 10:25:39.516928 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.516939 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.516949 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.516960 | orchestrator | 2025-10-09 10:25:39.516972 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-10-09 10:25:39.516983 | orchestrator | Thursday 09 October 2025 10:24:29 +0000 (0:00:00.554) 0:01:01.116 ****** 2025-10-09 10:25:39.516994 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.517006 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.517016 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.517028 | orchestrator | 2025-10-09 10:25:39.517039 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-10-09 10:25:39.517050 | orchestrator | Thursday 09 October 2025 10:24:29 +0000 (0:00:00.610) 0:01:01.726 ****** 2025-10-09 10:25:39.517061 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.517072 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.517083 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.517093 | orchestrator | 2025-10-09 10:25:39.517105 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-10-09 10:25:39.517115 | orchestrator | Thursday 09 October 2025 10:24:30 +0000 (0:00:00.367) 0:01:02.094 ****** 2025-10-09 10:25:39.517132 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.517144 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.517154 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.517165 | orchestrator | 2025-10-09 10:25:39.517176 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-10-09 10:25:39.517187 | orchestrator | Thursday 09 October 2025 10:24:30 +0000 (0:00:00.398) 0:01:02.492 ****** 2025-10-09 10:25:39.517198 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.517209 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.517220 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.517231 | orchestrator | 2025-10-09 10:25:39.517256 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-10-09 10:25:39.517268 | orchestrator | Thursday 09 October 2025 10:24:30 +0000 (0:00:00.365) 0:01:02.858 ****** 2025-10-09 10:25:39.517280 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.517291 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.517303 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.517314 | orchestrator | 2025-10-09 10:25:39.517325 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-10-09 10:25:39.517336 | orchestrator | Thursday 09 October 2025 10:24:31 
+0000 (0:00:00.564) 0:01:03.423 ****** 2025-10-09 10:25:39.517348 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.517359 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.517370 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.517381 | orchestrator | 2025-10-09 10:25:39.517392 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-10-09 10:25:39.517403 | orchestrator | Thursday 09 October 2025 10:24:31 +0000 (0:00:00.346) 0:01:03.769 ****** 2025-10-09 10:25:39.517415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517548 | orchestrator | 2025-10-09 10:25:39.517559 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-10-09 10:25:39.517571 | orchestrator | Thursday 09 October 2025 10:24:33 +0000 (0:00:01.491) 0:01:05.260 ****** 2025-10-09 10:25:39.517582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517707 | orchestrator | 2025-10-09 10:25:39.517719 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-10-09 10:25:39.517730 | orchestrator | Thursday 09 October 2025 10:24:37 +0000 (0:00:03.994) 0:01:09.255 ****** 2025-10-09 10:25:39.517742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.517867 | orchestrator | 2025-10-09 10:25:39.517879 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:25:39.517890 | orchestrator | Thursday 09 October 2025 10:24:39 +0000 (0:00:01.991) 0:01:11.247 ****** 2025-10-09 10:25:39.517902 | orchestrator | 2025-10-09 10:25:39.517913 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:25:39.517925 | orchestrator | Thursday 09 October 2025 10:24:39 +0000 (0:00:00.306) 0:01:11.553 ****** 2025-10-09 10:25:39.517935 | orchestrator | 2025-10-09 10:25:39.517947 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:25:39.517958 | orchestrator | Thursday 09 October 2025 10:24:39 +0000 (0:00:00.076) 0:01:11.629 ****** 2025-10-09 10:25:39.517974 | orchestrator | 2025-10-09 10:25:39.517986 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-10-09 10:25:39.517997 | orchestrator | Thursday 09 October 2025 10:24:39 +0000 (0:00:00.071) 0:01:11.701 
****** 2025-10-09 10:25:39.518008 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:39.518050 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:39.518063 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:39.518074 | orchestrator | 2025-10-09 10:25:39.518085 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-10-09 10:25:39.518095 | orchestrator | Thursday 09 October 2025 10:24:47 +0000 (0:00:07.664) 0:01:19.365 ****** 2025-10-09 10:25:39.518106 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:39.518117 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:39.518127 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:39.518138 | orchestrator | 2025-10-09 10:25:39.518148 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-10-09 10:25:39.518159 | orchestrator | Thursday 09 October 2025 10:24:49 +0000 (0:00:02.310) 0:01:21.676 ****** 2025-10-09 10:25:39.518170 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:39.518181 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:39.518191 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:39.518202 | orchestrator | 2025-10-09 10:25:39.518213 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-10-09 10:25:39.518224 | orchestrator | Thursday 09 October 2025 10:24:56 +0000 (0:00:06.631) 0:01:28.307 ****** 2025-10-09 10:25:39.518234 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.518292 | orchestrator | 2025-10-09 10:25:39.518303 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-10-09 10:25:39.518314 | orchestrator | Thursday 09 October 2025 10:24:56 +0000 (0:00:00.405) 0:01:28.713 ****** 2025-10-09 10:25:39.518325 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.518336 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.518347 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.518358 | orchestrator | 2025-10-09 10:25:39.518376 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-10-09 10:25:39.518387 | orchestrator | Thursday 09 October 2025 10:24:57 +0000 (0:00:00.829) 0:01:29.542 ****** 2025-10-09 10:25:39.518398 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.518409 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.518420 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:39.518430 | orchestrator | 2025-10-09 10:25:39.518441 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-10-09 10:25:39.518452 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:00.813) 0:01:30.356 ****** 2025-10-09 10:25:39.518463 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.518474 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.518484 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.518495 | orchestrator | 2025-10-09 10:25:39.518506 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-10-09 10:25:39.518517 | orchestrator | Thursday 09 October 2025 10:24:59 +0000 (0:00:00.892) 0:01:31.249 ****** 2025-10-09 10:25:39.518528 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.518539 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.518549 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:39.518560 | 
orchestrator | 2025-10-09 10:25:39.518571 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-10-09 10:25:39.518582 | orchestrator | Thursday 09 October 2025 10:24:59 +0000 (0:00:00.626) 0:01:31.875 ****** 2025-10-09 10:25:39.518592 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.518603 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.518613 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.518623 | orchestrator | 2025-10-09 10:25:39.518632 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-10-09 10:25:39.518642 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:01.205) 0:01:33.080 ****** 2025-10-09 10:25:39.518658 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.518668 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.518677 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.518687 | orchestrator | 2025-10-09 10:25:39.518696 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-10-09 10:25:39.518706 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:00.859) 0:01:33.940 ****** 2025-10-09 10:25:39.518715 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.518725 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.518734 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.518744 | orchestrator | 2025-10-09 10:25:39.518753 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-10-09 10:25:39.518763 | orchestrator | Thursday 09 October 2025 10:25:02 +0000 (0:00:00.306) 0:01:34.247 ****** 2025-10-09 10:25:39.518773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518788 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518799 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518809 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518820 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518830 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518845 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518856 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518871 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518881 | orchestrator | 2025-10-09 10:25:39.518891 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-10-09 10:25:39.518901 | orchestrator | Thursday 09 October 2025 10:25:03 +0000 (0:00:01.428) 0:01:35.675 ****** 2025-10-09 10:25:39.518911 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518925 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518945 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.518997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519028 | orchestrator | 2025-10-09 10:25:39.519037 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-10-09 10:25:39.519047 | orchestrator | Thursday 09 October 2025 10:25:09 +0000 (0:00:06.108) 0:01:41.784 ****** 2025-10-09 10:25:39.519057 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519066 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519077 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519097 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519140 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:25:39.519191 | orchestrator | 2025-10-09 10:25:39.519201 | 
orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:25:39.519210 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:02.879) 0:01:44.663 ****** 2025-10-09 10:25:39.519220 | orchestrator | 2025-10-09 10:25:39.519229 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:25:39.519254 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:00.063) 0:01:44.726 ****** 2025-10-09 10:25:39.519264 | orchestrator | 2025-10-09 10:25:39.519274 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:25:39.519283 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:00.061) 0:01:44.787 ****** 2025-10-09 10:25:39.519293 | orchestrator | 2025-10-09 10:25:39.519302 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-10-09 10:25:39.519312 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:00.071) 0:01:44.858 ****** 2025-10-09 10:25:39.519321 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:39.519331 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:39.519341 | orchestrator | 2025-10-09 10:25:39.519350 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-10-09 10:25:39.519360 | orchestrator | Thursday 09 October 2025 10:25:18 +0000 (0:00:06.129) 0:01:50.988 ****** 2025-10-09 10:25:39.519370 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:39.519379 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:39.519389 | orchestrator | 2025-10-09 10:25:39.519398 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-10-09 10:25:39.519408 | orchestrator | Thursday 09 October 2025 10:25:25 +0000 (0:00:06.163) 0:01:57.151 ****** 2025-10-09 10:25:39.519417 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:39.519427 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:39.519436 | orchestrator | 2025-10-09 10:25:39.519446 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-10-09 10:25:39.519455 | orchestrator | Thursday 09 October 2025 10:25:31 +0000 (0:00:06.648) 0:02:03.800 ****** 2025-10-09 10:25:39.519465 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:39.519474 | orchestrator | 2025-10-09 10:25:39.519488 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-10-09 10:25:39.519498 | orchestrator | Thursday 09 October 2025 10:25:31 +0000 (0:00:00.170) 0:02:03.971 ****** 2025-10-09 10:25:39.519507 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:39.519517 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:39.519526 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:39.519536 | orchestrator | 2025-10-09 10:25:39.519545 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-10-09 10:25:39.519555 | orchestrator | Thursday 09 October 2025 10:25:32 +0000 (0:00:00.822) 0:02:04.793 ****** 2025-10-09 10:25:39.519565 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:39.519574 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:39.519584 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:39.519593 | orchestrator | 2025-10-09 10:25:39.519603 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] 
******************************
2025-10-09 10:25:39.519622 | orchestrator | Thursday 09 October 2025 10:25:33 +0000 (0:00:00.690) 0:02:05.484 ******
2025-10-09 10:25:39.519632 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:25:39.519641 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:25:39.519651 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:25:39.519660 | orchestrator |
2025-10-09 10:25:39.519670 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-10-09 10:25:39.519679 | orchestrator | Thursday 09 October 2025 10:25:34 +0000 (0:00:00.877) 0:02:06.361 ******
2025-10-09 10:25:39.519689 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:39.519698 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:39.519708 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:25:39.519717 | orchestrator |
2025-10-09 10:25:39.519727 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-10-09 10:25:39.519736 | orchestrator | Thursday 09 October 2025 10:25:34 +0000 (0:00:00.666) 0:02:07.027 ******
2025-10-09 10:25:39.519746 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:25:39.519756 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:25:39.519765 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:25:39.519774 | orchestrator |
2025-10-09 10:25:39.519784 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-10-09 10:25:39.519794 | orchestrator | Thursday 09 October 2025 10:25:35 +0000 (0:00:00.901) 0:02:07.928 ******
2025-10-09 10:25:39.519803 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:25:39.519813 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:25:39.519822 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:25:39.519832 | orchestrator |
2025-10-09 10:25:39.519841 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:25:39.519851 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-10-09 10:25:39.519861 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-10-09 10:25:39.519877 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-10-09 10:25:39.519887 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:25:39.519897 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:25:39.519907 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:25:39.519916 | orchestrator |
2025-10-09 10:25:39.519926 | orchestrator |
2025-10-09 10:25:39.519936 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:25:39.519945 | orchestrator | Thursday 09 October 2025 10:25:36 +0000 (0:00:00.925) 0:02:08.854 ******
2025-10-09 10:25:39.519955 | orchestrator | ===============================================================================
2025-10-09 10:25:39.519965 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.71s
2025-10-09 10:25:39.519974 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.79s
2025-10-09 10:25:39.519984 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.28s
2025-10-09 10:25:39.519993 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.47s
2025-10-09 10:25:39.520003 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.11s
2025-10-09 10:25:39.520012 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.99s
2025-10-09 10:25:39.520022 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 3.86s
2025-10-09 10:25:39.520032 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 3.20s
2025-10-09 10:25:39.520047 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.15s
2025-10-09 10:25:39.520057 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.88s
2025-10-09 10:25:39.520066 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.86s
2025-10-09 10:25:39.520076 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.61s
2025-10-09 10:25:39.520086 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.07s
2025-10-09 10:25:39.520095 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.99s
2025-10-09 10:25:39.520105 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.97s
2025-10-09 10:25:39.520118 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.94s
2025-10-09 10:25:39.520128 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.92s
2025-10-09 10:25:39.520138 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s
2025-10-09 10:25:39.520147 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s
2025-10-09 10:25:39.520157 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.39s
2025-10-09 10:25:42.547343 | orchestrator | 2025-10-09 10:25:42 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:25:42.552399 | orchestrator | 2025-10-09 10:25:42 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED
2025-10-09 10:25:42.552440 | orchestrator | 2025-10-09 10:25:42 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:25:45.595324 | orchestrator | 2025-10-09 10:25:45 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:25:45.596912 | orchestrator | 2025-10-09 10:25:45 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED
2025-10-09 10:25:45.596942 | orchestrator | 2025-10-09 10:25:45 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:25:48.647420 | orchestrator | 2025-10-09 10:25:48 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:25:48.649742 | orchestrator | 2025-10-09 10:25:48 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED
2025-10-09 10:25:48.649965 | orchestrator | 2025-10-09 10:25:48 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:25:51.691235 | orchestrator | 2025-10-09 10:25:51 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:25:51.692621 | orchestrator | 2025-10-09 10:25:51 | INFO  | Task
5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:51.692658 | orchestrator | 2025-10-09 10:25:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:54.747289 | orchestrator | 2025-10-09 10:25:54 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:54.749754 | orchestrator | 2025-10-09 10:25:54 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:54.750143 | orchestrator | 2025-10-09 10:25:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:57.807712 | orchestrator | 2025-10-09 10:25:57 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:25:57.808359 | orchestrator | 2025-10-09 10:25:57 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:25:57.808394 | orchestrator | 2025-10-09 10:25:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:00.843646 | orchestrator | 2025-10-09 10:26:00 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:00.844218 | orchestrator | 2025-10-09 10:26:00 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:00.844308 | orchestrator | 2025-10-09 10:26:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:03.895429 | orchestrator | 2025-10-09 10:26:03 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:03.895522 | orchestrator | 2025-10-09 10:26:03 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:03.895536 | orchestrator | 2025-10-09 10:26:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:06.947743 | orchestrator | 2025-10-09 10:26:06 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:06.948513 | orchestrator | 2025-10-09 10:26:06 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:06.948547 | orchestrator | 2025-10-09 10:26:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:10.005027 | orchestrator | 2025-10-09 10:26:10 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:10.012046 | orchestrator | 2025-10-09 10:26:10 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:10.012074 | orchestrator | 2025-10-09 10:26:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:13.064587 | orchestrator | 2025-10-09 10:26:13 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:13.067533 | orchestrator | 2025-10-09 10:26:13 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:13.067563 | orchestrator | 2025-10-09 10:26:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:16.115060 | orchestrator | 2025-10-09 10:26:16 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:16.115914 | orchestrator | 2025-10-09 10:26:16 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:16.115956 | orchestrator | 2025-10-09 10:26:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:19.155084 | orchestrator | 2025-10-09 10:26:19 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:19.156738 | orchestrator | 2025-10-09 10:26:19 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:19.156767 | orchestrator 
| 2025-10-09 10:26:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:22.200185 | orchestrator | 2025-10-09 10:26:22 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:22.202534 | orchestrator | 2025-10-09 10:26:22 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:22.202740 | orchestrator | 2025-10-09 10:26:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:25.256292 | orchestrator | 2025-10-09 10:26:25 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:25.256549 | orchestrator | 2025-10-09 10:26:25 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:25.256573 | orchestrator | 2025-10-09 10:26:25 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:28.304045 | orchestrator | 2025-10-09 10:26:28 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:28.305873 | orchestrator | 2025-10-09 10:26:28 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:28.306092 | orchestrator | 2025-10-09 10:26:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:31.347463 | orchestrator | 2025-10-09 10:26:31 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:31.348350 | orchestrator | 2025-10-09 10:26:31 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:31.348380 | orchestrator | 2025-10-09 10:26:31 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:34.404590 | orchestrator | 2025-10-09 10:26:34 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:34.405040 | orchestrator | 2025-10-09 10:26:34 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:34.405136 | orchestrator | 2025-10-09 10:26:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:37.453004 | orchestrator | 2025-10-09 10:26:37 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:37.453801 | orchestrator | 2025-10-09 10:26:37 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:37.454007 | orchestrator | 2025-10-09 10:26:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:40.496740 | orchestrator | 2025-10-09 10:26:40 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:40.499067 | orchestrator | 2025-10-09 10:26:40 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:40.499098 | orchestrator | 2025-10-09 10:26:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:43.537331 | orchestrator | 2025-10-09 10:26:43 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:43.539404 | orchestrator | 2025-10-09 10:26:43 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:43.540817 | orchestrator | 2025-10-09 10:26:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:46.591520 | orchestrator | 2025-10-09 10:26:46 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:46.591696 | orchestrator | 2025-10-09 10:26:46 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:46.591717 | orchestrator | 2025-10-09 10:26:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:49.635510 | 
orchestrator | 2025-10-09 10:26:49 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:49.635634 | orchestrator | 2025-10-09 10:26:49 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:49.635650 | orchestrator | 2025-10-09 10:26:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:52.676365 | orchestrator | 2025-10-09 10:26:52 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:52.677795 | orchestrator | 2025-10-09 10:26:52 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:52.677825 | orchestrator | 2025-10-09 10:26:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:55.732326 | orchestrator | 2025-10-09 10:26:55 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:55.733149 | orchestrator | 2025-10-09 10:26:55 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:55.734264 | orchestrator | 2025-10-09 10:26:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:58.774506 | orchestrator | 2025-10-09 10:26:58 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:26:58.775606 | orchestrator | 2025-10-09 10:26:58 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:26:58.775718 | orchestrator | 2025-10-09 10:26:58 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:01.824261 | orchestrator | 2025-10-09 10:27:01 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:01.825796 | orchestrator | 2025-10-09 10:27:01 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:01.826061 | orchestrator | 2025-10-09 10:27:01 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:04.868978 | orchestrator | 2025-10-09 10:27:04 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:04.870889 | orchestrator | 2025-10-09 10:27:04 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:04.870934 | orchestrator | 2025-10-09 10:27:04 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:07.918299 | orchestrator | 2025-10-09 10:27:07 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:07.919091 | orchestrator | 2025-10-09 10:27:07 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:07.919125 | orchestrator | 2025-10-09 10:27:07 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:10.971751 | orchestrator | 2025-10-09 10:27:10 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:10.972591 | orchestrator | 2025-10-09 10:27:10 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:10.972623 | orchestrator | 2025-10-09 10:27:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:14.023606 | orchestrator | 2025-10-09 10:27:14 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:14.024876 | orchestrator | 2025-10-09 10:27:14 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:14.024915 | orchestrator | 2025-10-09 10:27:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:17.072222 | orchestrator | 2025-10-09 10:27:17 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state 
STARTED 2025-10-09 10:27:17.074355 | orchestrator | 2025-10-09 10:27:17 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:17.074465 | orchestrator | 2025-10-09 10:27:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:20.119303 | orchestrator | 2025-10-09 10:27:20 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:20.120175 | orchestrator | 2025-10-09 10:27:20 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:20.128570 | orchestrator | 2025-10-09 10:27:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:23.168320 | orchestrator | 2025-10-09 10:27:23 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:23.169449 | orchestrator | 2025-10-09 10:27:23 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:23.169473 | orchestrator | 2025-10-09 10:27:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:26.225862 | orchestrator | 2025-10-09 10:27:26 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:26.227415 | orchestrator | 2025-10-09 10:27:26 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:26.227452 | orchestrator | 2025-10-09 10:27:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:29.272816 | orchestrator | 2025-10-09 10:27:29 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:29.274395 | orchestrator | 2025-10-09 10:27:29 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:29.274453 | orchestrator | 2025-10-09 10:27:29 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:32.311692 | orchestrator | 2025-10-09 10:27:32 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:32.312753 | orchestrator | 2025-10-09 10:27:32 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:32.314168 | orchestrator | 2025-10-09 10:27:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:35.347745 | orchestrator | 2025-10-09 10:27:35 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:35.349380 | orchestrator | 2025-10-09 10:27:35 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:35.349681 | orchestrator | 2025-10-09 10:27:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:38.402319 | orchestrator | 2025-10-09 10:27:38 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:38.404264 | orchestrator | 2025-10-09 10:27:38 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:38.404296 | orchestrator | 2025-10-09 10:27:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:41.445998 | orchestrator | 2025-10-09 10:27:41 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:41.447469 | orchestrator | 2025-10-09 10:27:41 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:41.447501 | orchestrator | 2025-10-09 10:27:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:44.485789 | orchestrator | 2025-10-09 10:27:44 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:44.486358 | orchestrator | 2025-10-09 10:27:44 | INFO  | Task 
5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:44.486396 | orchestrator | 2025-10-09 10:27:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:47.529582 | orchestrator | 2025-10-09 10:27:47 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:47.532491 | orchestrator | 2025-10-09 10:27:47 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:47.532534 | orchestrator | 2025-10-09 10:27:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:50.574326 | orchestrator | 2025-10-09 10:27:50 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:50.574432 | orchestrator | 2025-10-09 10:27:50 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:50.574444 | orchestrator | 2025-10-09 10:27:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:53.610480 | orchestrator | 2025-10-09 10:27:53 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:53.611529 | orchestrator | 2025-10-09 10:27:53 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:53.611562 | orchestrator | 2025-10-09 10:27:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:56.652853 | orchestrator | 2025-10-09 10:27:56 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:56.654516 | orchestrator | 2025-10-09 10:27:56 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:56.654602 | orchestrator | 2025-10-09 10:27:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:59.713983 | orchestrator | 2025-10-09 10:27:59 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:27:59.715793 | orchestrator | 2025-10-09 10:27:59 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:27:59.715826 | orchestrator | 2025-10-09 10:27:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:02.762458 | orchestrator | 2025-10-09 10:28:02 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:02.763786 | orchestrator | 2025-10-09 10:28:02 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:02.763809 | orchestrator | 2025-10-09 10:28:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:05.808926 | orchestrator | 2025-10-09 10:28:05 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:05.809412 | orchestrator | 2025-10-09 10:28:05 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:05.809464 | orchestrator | 2025-10-09 10:28:05 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:08.851895 | orchestrator | 2025-10-09 10:28:08 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:08.853418 | orchestrator | 2025-10-09 10:28:08 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:08.853452 | orchestrator | 2025-10-09 10:28:08 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:11.896982 | orchestrator | 2025-10-09 10:28:11 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:11.897964 | orchestrator | 2025-10-09 10:28:11 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:11.899348 | orchestrator 
| 2025-10-09 10:28:11 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:14.944195 | orchestrator | 2025-10-09 10:28:14 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:14.944345 | orchestrator | 2025-10-09 10:28:14 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:14.947435 | orchestrator | 2025-10-09 10:28:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:17.991387 | orchestrator | 2025-10-09 10:28:17 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:17.992183 | orchestrator | 2025-10-09 10:28:17 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:17.992199 | orchestrator | 2025-10-09 10:28:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:21.040530 | orchestrator | 2025-10-09 10:28:21 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:21.042581 | orchestrator | 2025-10-09 10:28:21 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:21.042656 | orchestrator | 2025-10-09 10:28:21 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:24.082164 | orchestrator | 2025-10-09 10:28:24 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:24.083117 | orchestrator | 2025-10-09 10:28:24 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:24.083150 | orchestrator | 2025-10-09 10:28:24 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:27.135848 | orchestrator | 2025-10-09 10:28:27 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:27.136366 | orchestrator | 2025-10-09 10:28:27 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:27.136393 | orchestrator | 2025-10-09 10:28:27 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:30.194317 | orchestrator | 2025-10-09 10:28:30 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:30.194483 | orchestrator | 2025-10-09 10:28:30 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:30.194501 | orchestrator | 2025-10-09 10:28:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:33.242174 | orchestrator | 2025-10-09 10:28:33 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:33.244561 | orchestrator | 2025-10-09 10:28:33 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:33.244653 | orchestrator | 2025-10-09 10:28:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:36.294859 | orchestrator | 2025-10-09 10:28:36 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:36.295981 | orchestrator | 2025-10-09 10:28:36 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:36.296313 | orchestrator | 2025-10-09 10:28:36 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:39.339305 | orchestrator | 2025-10-09 10:28:39 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:39.340201 | orchestrator | 2025-10-09 10:28:39 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:39.340245 | orchestrator | 2025-10-09 10:28:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:42.375219 | 
orchestrator | 2025-10-09 10:28:42 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:42.376769 | orchestrator | 2025-10-09 10:28:42 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:42.376862 | orchestrator | 2025-10-09 10:28:42 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:45.419973 | orchestrator | 2025-10-09 10:28:45 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:45.420391 | orchestrator | 2025-10-09 10:28:45 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:45.420507 | orchestrator | 2025-10-09 10:28:45 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:48.458455 | orchestrator | 2025-10-09 10:28:48 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:48.460524 | orchestrator | 2025-10-09 10:28:48 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:48.460604 | orchestrator | 2025-10-09 10:28:48 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:51.505911 | orchestrator | 2025-10-09 10:28:51 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:51.506728 | orchestrator | 2025-10-09 10:28:51 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:51.506851 | orchestrator | 2025-10-09 10:28:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:54.553636 | orchestrator | 2025-10-09 10:28:54 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:54.553850 | orchestrator | 2025-10-09 10:28:54 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:54.553873 | orchestrator | 2025-10-09 10:28:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:28:57.613040 | orchestrator | 2025-10-09 10:28:57 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:28:57.616639 | orchestrator | 2025-10-09 10:28:57 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:28:57.616708 | orchestrator | 2025-10-09 10:28:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:00.665285 | orchestrator | 2025-10-09 10:29:00 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:00.666975 | orchestrator | 2025-10-09 10:29:00 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:29:00.667094 | orchestrator | 2025-10-09 10:29:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:03.711153 | orchestrator | 2025-10-09 10:29:03 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:03.714528 | orchestrator | 2025-10-09 10:29:03 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:29:03.714561 | orchestrator | 2025-10-09 10:29:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:06.760923 | orchestrator | 2025-10-09 10:29:06 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:06.761799 | orchestrator | 2025-10-09 10:29:06 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state STARTED 2025-10-09 10:29:06.761992 | orchestrator | 2025-10-09 10:29:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:09.831406 | orchestrator | 2025-10-09 10:29:09 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state 
STARTED
2025-10-09 10:29:09.835978 | orchestrator | 2025-10-09 10:29:09 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED
2025-10-09 10:29:09.851197 | orchestrator | 2025-10-09 10:29:09 | INFO  | Task 5a5e249b-db82-4cad-b19d-280c4a3adc96 is in state SUCCESS
2025-10-09 10:29:09.854319 | orchestrator |
2025-10-09 10:29:09.854355 | orchestrator |
2025-10-09 10:29:09.854368 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:29:09.854381 | orchestrator |
2025-10-09 10:29:09.854392 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:29:09.854405 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.273) 0:00:00.273 ******
2025-10-09 10:29:09.854417 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:29:09.854429 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:29:09.854477 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:29:09.854491 | orchestrator |
2025-10-09 10:29:09.854502 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:29:09.854513 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.322) 0:00:00.595 ******
2025-10-09 10:29:09.854525 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-10-09 10:29:09.854537 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-10-09 10:29:09.854548 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-10-09 10:29:09.854559 | orchestrator |
2025-10-09 10:29:09.854647 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-10-09 10:29:09.854658 | orchestrator |
2025-10-09 10:29:09.854669 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-10-09 10:29:09.854680 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.529) 0:00:01.124 ******
2025-10-09 10:29:09.854813 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:29:09.854829 | orchestrator |
2025-10-09 10:29:09.854841 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-10-09 10:29:09.854879 | orchestrator | Thursday 09 October 2025 10:22:03 +0000 (0:00:00.771) 0:00:01.896 ******
2025-10-09 10:29:09.854892 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:29:09.854903 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:29:09.854914 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:29:09.854925 | orchestrator |
2025-10-09 10:29:09.854939 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-10-09 10:29:09.854974 | orchestrator | Thursday 09 October 2025 10:22:04 +0000 (0:00:00.731) 0:00:02.628 ******
2025-10-09 10:29:09.854987 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:29:09.854999 | orchestrator |
2025-10-09 10:29:09.855012 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-10-09 10:29:09.855024 | orchestrator | Thursday 09 October 2025 10:22:05 +0000 (0:00:01.196) 0:00:03.824 ******
2025-10-09 10:29:09.855036 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:29:09.855048 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:29:09.855060 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:29:09.855072 |
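Editor's note: the long run of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines above is produced by a simple client-side poll of the OSISM task queue. The following is a minimal, hypothetical sketch of that wait/check pattern in Python; get_task_state() is a placeholder standing in for whatever call returns a task's state, not the actual osism client API, and the 1-second interval matches the log output.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Poll each still-pending task once per interval, print its state in the
    # same style as the log above, and stop once no task remains STARTED.
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_task_state(task_id)  # hypothetical helper
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = {t for t, s in states.items() if s == "STARTED"}
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
    return states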
orchestrator |
2025-10-09 10:29:09.855111 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-10-09 10:29:09.855126 | orchestrator | Thursday 09 October 2025 10:22:06 +0000 (0:00:00.646) 0:00:04.471 ******
2025-10-09 10:29:09.855139 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:29:09.855151 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:29:09.855216 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:29:09.855249 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:29:09.855263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:29:09.855277 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:29:09.855288 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-10-09 10:29:09.855301 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-10-09 10:29:09.855313 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-10-09 10:29:09.855324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-10-09 10:29:09.855336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-10-09 10:29:09.855348 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-10-09 10:29:09.855359 | orchestrator |
2025-10-09 10:29:09.855398 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-10-09 10:29:09.855409 | orchestrator | Thursday 09 October 2025 10:22:09 +0000 (0:00:02.874) 0:00:07.345 ******
2025-10-09 10:29:09.855420 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-10-09 10:29:09.855521 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-10-09 10:29:09.855563 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-10-09 10:29:09.855574 | orchestrator |
2025-10-09 10:29:09.855585 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-10-09 10:29:09.855596 | orchestrator | Thursday 09 October 2025 10:22:09 +0000 (0:00:00.799) 0:00:08.145 ******
2025-10-09 10:29:09.855607 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-10-09 10:29:09.855617 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-10-09 10:29:09.855628 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-10-09 10:29:09.855639 | orchestrator |
2025-10-09 10:29:09.855649 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-10-09 10:29:09.855660 | orchestrator | Thursday 09 October 2025 10:22:12 +0000 (0:00:02.130) 0:00:10.275 ******
2025-10-09 10:29:09.855671 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-10-09 10:29:09.855682 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:29:09.855706 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-10-09 10:29:09.855718 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:29:09.855728 |
orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-10-09 10:29:09.855752 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.855763 | orchestrator | 2025-10-09 10:29:09.855854 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-10-09 10:29:09.855866 | orchestrator | Thursday 09 October 2025 10:22:12 +0000 (0:00:00.796) 0:00:11.072 ****** 2025-10-09 10:29:09.855881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.855908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.855920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.855958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.855971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.855992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.856015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.856033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.856045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.856087 | orchestrator | 2025-10-09 10:29:09.856099 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-10-09 10:29:09.856162 | orchestrator | Thursday 09 October 2025 10:22:15 +0000 (0:00:02.473) 0:00:13.545 ****** 2025-10-09 10:29:09.856204 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.856216 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.856259 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.856271 | orchestrator | 2025-10-09 10:29:09.856281 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-10-09 10:29:09.856292 | orchestrator | Thursday 09 October 2025 10:22:16 +0000 (0:00:01.162) 0:00:14.708 ****** 2025-10-09 10:29:09.856303 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-10-09 10:29:09.856315 | orchestrator | changed: 
[testbed-node-0] => (item=users) 2025-10-09 10:29:09.856326 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-10-09 10:29:09.856336 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-10-09 10:29:09.856347 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-10-09 10:29:09.856385 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-10-09 10:29:09.856396 | orchestrator | 2025-10-09 10:29:09.856407 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-10-09 10:29:09.856418 | orchestrator | Thursday 09 October 2025 10:22:19 +0000 (0:00:02.685) 0:00:17.393 ****** 2025-10-09 10:29:09.856429 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.856440 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.856451 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.856462 | orchestrator | 2025-10-09 10:29:09.856473 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-10-09 10:29:09.856483 | orchestrator | Thursday 09 October 2025 10:22:21 +0000 (0:00:02.119) 0:00:19.512 ****** 2025-10-09 10:29:09.856494 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.856505 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.856516 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.856527 | orchestrator | 2025-10-09 10:29:09.856538 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-10-09 10:29:09.856557 | orchestrator | Thursday 09 October 2025 10:22:23 +0000 (0:00:02.493) 0:00:22.006 ****** 2025-10-09 10:29:09.856569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.856640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.856656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-10-09 10:29:09.856673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20250928', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-09 10:29:09.856686 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.856697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.856709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.856727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.856744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20250928', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-09 10:29:09.856756 | orchestrator | skipping: 
[testbed-node-0] 2025-10-09 10:29:09.856768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.856784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.856795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.856808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20250928', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-09 10:29:09.856826 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.856837 | orchestrator | 2025-10-09 10:29:09.856848 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-10-09 10:29:09.856859 | orchestrator | Thursday 09 October 2025 10:22:25 +0000 (0:00:01.952) 0:00:23.958 ****** 2025-10-09 10:29:09.856964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.856984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.856996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.857037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20250928', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-09 10:29:09.857055 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.857086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20250928', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-09 10:29:09.857097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.857125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20250928', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb', '__omit_place_holder__c9659deab289367b4b81beefb465e2e37a8900eb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-09 10:29:09.857149 | orchestrator | 2025-10-09 10:29:09.857160 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-10-09 10:29:09.857171 | orchestrator | Thursday 09 October 2025 10:22:30 +0000 (0:00:05.181) 0:00:29.140 ****** 2025-10-09 10:29:09.857182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.857394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.857404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.857414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.857424 | orchestrator | 2025-10-09 10:29:09.857434 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-10-09 10:29:09.857444 | orchestrator | Thursday 09 October 2025 10:22:34 +0000 (0:00:03.648) 0:00:32.789 ****** 2025-10-09 10:29:09.857453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-10-09 10:29:09.858947 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-10-09 10:29:09.858980 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-10-09 10:29:09.858990 | orchestrator |
2025-10-09 10:29:09.859001 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-10-09 10:29:09.859011 | orchestrator | Thursday 09 October 2025 10:22:38 +0000 (0:00:03.521) 0:00:36.311 ******
2025-10-09 10:29:09.859021 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-10-09 10:29:09.859031 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-10-09 10:29:09.859040 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-10-09 10:29:09.859050 | orchestrator |
2025-10-09 10:29:09.859059 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-10-09 10:29:09.859069 | orchestrator | Thursday 09 October 2025 10:22:45 +0000 (0:00:07.246) 0:00:43.558 ******
2025-10-09 10:29:09.859078 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:29:09.859088 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:29:09.859098 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:29:09.859108 | orchestrator |
2025-10-09 10:29:09.859117 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-10-09 10:29:09.859132 | orchestrator | Thursday 09 October 2025 10:22:46 +0000 (0:00:00.994) 0:00:44.552 ******
2025-10-09 10:29:09.859142 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-10-09 10:29:09.859168 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-10-09 10:29:09.859177 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-10-09 10:29:09.859187 | orchestrator |
2025-10-09 10:29:09.859197 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-10-09 10:29:09.859206 | orchestrator | Thursday 09 October 2025 10:22:49 +0000 (0:00:02.871) 0:00:47.424 ******
2025-10-09 10:29:09.859216 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-10-09 10:29:09.859284 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-10-09 10:29:09.859295 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-10-09 10:29:09.859305 | orchestrator |
2025-10-09 10:29:09.859315 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-10-09 10:29:09.859324 | orchestrator | Thursday 09 October 2025 10:22:53 +0000 (0:00:04.383) 0:00:51.807 ******
2025-10-09 10:29:09.859334 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-10-09 10:29:09.859344 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-10-09 10:29:09.859353 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-10-09 10:29:09.859363 | orchestrator |
2025-10-09 10:29:09.859372 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-10-09 10:29:09.859382 | orchestrator | Thursday 09 October 2025 10:22:55 +0000 (0:00:02.380) 0:00:54.188 ****** 2025-10-09 10:29:09.859392 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-10-09 10:29:09.859401 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-10-09 10:29:09.859411 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-10-09 10:29:09.859420 | orchestrator | 2025-10-09 10:29:09.859430 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-10-09 10:29:09.859439 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:02.727) 0:00:56.916 ****** 2025-10-09 10:29:09.859449 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.859459 | orchestrator | 2025-10-09 10:29:09.859468 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-10-09 10:29:09.859478 | orchestrator | Thursday 09 October 2025 10:23:00 +0000 (0:00:01.400) 0:00:58.317 ****** 2025-10-09 10:29:09.859489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.859510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.859529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.859541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.859554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.859566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.859578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.859590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.859632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.859651 | orchestrator | 2025-10-09 10:29:09.859662 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS 
certificate] *** 2025-10-09 10:29:09.859673 | orchestrator | Thursday 09 October 2025 10:23:04 +0000 (0:00:04.598) 0:01:02.915 ****** 2025-10-09 10:29:09.859689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.859700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.859709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.859718 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.859728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.859771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.859788 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.859804 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.859814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.859827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.859856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.859866 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.859876 | orchestrator | 2025-10-09 10:29:09.859921 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-10-09 10:29:09.859931 | orchestrator | Thursday 09 October 2025 10:23:06 +0000 (0:00:02.085) 0:01:05.001 ****** 2025-10-09 10:29:09.859940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2025-10-09 10:29:09.859948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.859967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.859976 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.859984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.859996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860058 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.860066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860097 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.860105 | orchestrator | 2025-10-09 10:29:09.860113 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-10-09 10:29:09.860121 | orchestrator | Thursday 09 October 2025 10:23:10 +0000 (0:00:04.103) 0:01:09.104 ****** 2025-10-09 10:29:09.860134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860165 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.860173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860205 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.860217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860261 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.860269 | orchestrator | 2025-10-09 10:29:09.860277 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-10-09 10:29:09.860285 | orchestrator | Thursday 09 October 2025 10:23:13 +0000 (0:00:02.512) 0:01:11.617 ****** 2025-10-09 10:29:09.860293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860325 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.860333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860366 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.860374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860406 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.860414 | orchestrator | 2025-10-09 10:29:09.860423 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-10-09 10:29:09.860430 | orchestrator | Thursday 09 October 2025 10:23:14 +0000 (0:00:01.022) 0:01:12.640 ****** 2025-10-09 10:29:09.860438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860469 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.860481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860511 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.860519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860548 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.860555 | orchestrator | 2025-10-09 10:29:09.860563 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-10-09 10:29:09.860571 | orchestrator | Thursday 09 October 2025 10:23:15 +0000 (0:00:00.987) 0:01:13.627 ****** 2025-10-09 10:29:09.860583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860613 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.860621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860653 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.860664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860695 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.860703 | orchestrator | 2025-10-09 10:29:09.860711 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-10-09 10:29:09.860719 | orchestrator | Thursday 09 October 2025 10:23:16 +0000 (0:00:00.806) 0:01:14.434 ****** 2025-10-09 10:29:09.860727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860756 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.860764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860798 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.860806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860831 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.860838 | orchestrator | 2025-10-09 10:29:09.860846 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-10-09 10:29:09.860877 | orchestrator | Thursday 09 October 2025 10:23:16 +0000 (0:00:00.620) 0:01:15.054 ****** 2025-10-09 10:29:09.860891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.860905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.860918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.860989 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.860998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.861006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.861014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.861023 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.861036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:29:09.861045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}})  2025-10-09 10:29:09.861063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:29:09.861072 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.861080 | orchestrator | 2025-10-09 10:29:09.861088 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-10-09 10:29:09.861096 | orchestrator | Thursday 09 October 2025 10:23:18 +0000 (0:00:01.264) 0:01:16.318 ****** 2025-10-09 10:29:09.861103 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-10-09 10:29:09.861111 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-10-09 10:29:09.861119 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-10-09 10:29:09.861127 | orchestrator | 2025-10-09 10:29:09.861135 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-10-09 10:29:09.861143 | orchestrator | Thursday 09 October 2025 10:23:21 +0000 (0:00:03.839) 0:01:20.158 ****** 2025-10-09 10:29:09.861150 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-10-09 10:29:09.861158 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-10-09 10:29:09.861166 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-10-09 10:29:09.861174 | orchestrator | 2025-10-09 10:29:09.861182 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-10-09 10:29:09.861190 | orchestrator | Thursday 09 October 2025 10:23:24 +0000 (0:00:02.520) 0:01:22.679 ****** 2025-10-09 10:29:09.861198 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:29:09.861206 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:29:09.861213 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:29:09.861221 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.861244 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:29:09.861293 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:29:09.861302 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.861310 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:29:09.861317 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.861325 | orchestrator | 2025-10-09 10:29:09.861333 | orchestrator | TASK [loadbalancer : 
Check loadbalancer containers] **************************** 2025-10-09 10:29:09.861341 | orchestrator | Thursday 09 October 2025 10:23:26 +0000 (0:00:01.984) 0:01:24.663 ****** 2025-10-09 10:29:09.861354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.861368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.861381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.5.20250928', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-09 10:29:09.861390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.861398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.861407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250928', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:29:09.861415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.861435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.861444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20250928', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:29:09.861452 | orchestrator | 2025-10-09 10:29:09.861460 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-10-09 10:29:09.861468 | orchestrator | Thursday 09 October 2025 10:23:30 +0000 (0:00:03.604) 0:01:28.267 ****** 2025-10-09 10:29:09.861480 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.861488 | orchestrator | 2025-10-09 10:29:09.861496 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-10-09 10:29:09.861503 | orchestrator | Thursday 09 October 2025 10:23:30 +0000 (0:00:00.857) 0:01:29.125 ****** 2025-10-09 10:29:09.861533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-10-09 10:29:09.861542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.861551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-10-09 10:29:09.865643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-10-09 10:29:09.865730 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.865738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.865744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865800 | orchestrator | 2025-10-09 10:29:09.865809 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-10-09 10:29:09.865815 | orchestrator | Thursday 09 October 2025 10:23:38 +0000 (0:00:07.532) 0:01:36.657 ****** 2025-10-09 10:29:09.865822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-10-09 10:29:09.865829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.865835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865852 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.865864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-10-09 10:29:09.865873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.865879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865892 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.865899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-10-09 10:29:09.865918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.865928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250928', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.865943 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.865950 | orchestrator | 2025-10-09 10:29:09.865956 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-10-09 10:29:09.866002 | orchestrator | Thursday 09 October 2025 10:23:40 +0000 (0:00:02.541) 0:01:39.199 ****** 2025-10-09 10:29:09.866008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-10-09 10:29:09.866047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-10-09 10:29:09.866055 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.866061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-10-09 10:29:09.866068 | 
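The recurring 'haproxy' mapping in the items above (for aodh: aodh_api on port 8042 internally and aodh_api_external behind api.testbed.osism.xyz) is the data the haproxy-config role renders into listener stanzas. A minimal Python sketch of that rendering, assuming a simplified renderer (render_listen) and a placeholder internal VIP of 192.168.16.9 rather than the actual kolla-ansible templates; node names, backend addresses and the port come from the logged items:

def render_listen(name, svc, backends, vip="192.168.16.9"):
    # svc mirrors a logged haproxy sub-dict, e.g.
    # {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}
    lines = [
        f"listen {name}",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['listen_port']}",
    ]
    # One server line per API host, in the same "check inter 2000 rise 2 fall 5"
    # style that appears verbatim later in the ceph-rgw custom_member_list.
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{svc['port']} check inter 2000 rise 2 fall 5")
    return "\n".join(lines)

print(render_listen(
    "aodh_api",
    {"enabled": "yes", "mode": "http", "external": False, "port": "8042", "listen_port": "8042"},
    [("testbed-node-0", "192.168.16.10"),
     ("testbed-node-1", "192.168.16.11"),
     ("testbed-node-2", "192.168.16.12")],
))

The *_external variants differ in binding the public side and carrying external_fqdn (api.testbed.osism.xyz); the "single external frontend" tasks that would consolidate those onto one shared frontend are skipped in this run.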
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-10-09 10:29:09.866075 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.866081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-10-09 10:29:09.866087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-10-09 10:29:09.866099 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.866105 | orchestrator | 2025-10-09 10:29:09.866112 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-10-09 10:29:09.866118 | orchestrator | Thursday 09 October 2025 10:23:42 +0000 (0:00:01.424) 0:01:40.624 ****** 2025-10-09 10:29:09.866124 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.866130 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.866136 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.866142 | orchestrator | 2025-10-09 10:29:09.866148 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-10-09 10:29:09.866154 | orchestrator | Thursday 09 October 2025 10:23:44 +0000 (0:00:01.882) 0:01:42.506 ****** 2025-10-09 10:29:09.866161 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.866167 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.866173 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.866179 | orchestrator | 2025-10-09 10:29:09.866185 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-10-09 10:29:09.866191 | orchestrator | Thursday 09 October 2025 10:23:46 +0000 (0:00:02.548) 0:01:45.055 ****** 2025-10-09 10:29:09.866198 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.866204 | orchestrator | 2025-10-09 10:29:09.866210 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-10-09 10:29:09.866216 | orchestrator | Thursday 09 October 2025 10:23:47 +0000 (0:00:00.861) 0:01:45.916 ****** 2025-10-09 10:29:09.866308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.866318 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.866351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.866378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866409 | orchestrator | 2025-10-09 10:29:09.866416 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-10-09 10:29:09.866423 | orchestrator | Thursday 09 October 2025 10:23:54 +0000 (0:00:06.517) 0:01:52.434 ****** 2025-10-09 10:29:09.866431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.866438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866458 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.866468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.866476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866494 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.866501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.866512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.866527 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.866534 | orchestrator | 2025-10-09 10:29:09.866542 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-10-09 10:29:09.866549 | orchestrator | Thursday 09 October 2025 10:23:54 +0000 (0:00:00.655) 0:01:53.090 ****** 2025-10-09 10:29:09.866556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:29:09.866572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 
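Each service entry above also carries a healthcheck block (interval/retries/start_period/timeout of 30/3/5/30 and a CMD-SHELL test such as healthcheck_curl http://192.168.16.10:9311 or healthcheck_port barbican-worker 5672). As a rough sketch of how such a block maps onto container-engine health options, assuming a hypothetical helper healthcheck_flags and docker-style --health-* switches (values are taken from the barbican-api item above; the durations appear to be seconds; the mapping itself is illustrative, not kolla code):

def healthcheck_flags(hc):
    # hc mirrors a logged healthcheck dict from the deploy output.
    return [
        f"--health-cmd={hc['test'][1]!r}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

print(" ".join(healthcheck_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
})))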
10:29:09.866580 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.866587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:29:09.866594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:29:09.866601 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.866608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:29:09.866615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:29:09.866621 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.866628 | orchestrator | 2025-10-09 10:29:09.866634 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-10-09 10:29:09.866640 | orchestrator | Thursday 09 October 2025 10:23:56 +0000 (0:00:01.159) 0:01:54.249 ****** 2025-10-09 10:29:09.866646 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.866652 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.866659 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.866665 | orchestrator | 2025-10-09 10:29:09.866671 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-10-09 10:29:09.866677 | orchestrator | Thursday 09 October 2025 10:23:57 +0000 (0:00:01.384) 0:01:55.634 ****** 2025-10-09 10:29:09.866683 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.866689 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.866695 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.866701 | orchestrator | 2025-10-09 10:29:09.866707 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-10-09 10:29:09.866714 | orchestrator | Thursday 09 October 2025 10:23:59 +0000 (0:00:02.117) 0:01:57.751 ****** 2025-10-09 10:29:09.866720 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.866726 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.866732 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.866738 | orchestrator | 2025-10-09 10:29:09.866744 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-10-09 10:29:09.866750 | orchestrator | Thursday 09 October 2025 10:23:59 +0000 (0:00:00.310) 0:01:58.062 ****** 2025-10-09 10:29:09.866757 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.866763 | orchestrator | 2025-10-09 10:29:09.866769 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-10-09 10:29:09.866775 | orchestrator | Thursday 09 October 2025 10:24:00 +0000 (0:00:00.899) 0:01:58.961 ****** 2025-10-09 10:29:09.866784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-10-09 10:29:09.866798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-10-09 10:29:09.866805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-10-09 10:29:09.866811 | orchestrator | 2025-10-09 10:29:09.866818 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-10-09 10:29:09.866824 | orchestrator | Thursday 09 October 2025 10:24:03 +0000 (0:00:02.816) 0:02:01.778 ****** 2025-10-09 10:29:09.866830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-10-09 10:29:09.866837 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.866843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-10-09 10:29:09.866849 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.866863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-10-09 10:29:09.866869 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.866876 | orchestrator | 2025-10-09 10:29:09.866882 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-10-09 10:29:09.866888 | orchestrator | Thursday 09 October 2025 10:24:05 +0000 (0:00:01.623) 0:02:03.401 ****** 2025-10-09 10:29:09.866897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:29:09.866905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:29:09.866912 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.866919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:29:09.866925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:29:09.866932 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.866938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:29:09.866945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:29:09.866951 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.866960 | orchestrator | 2025-10-09 10:29:09.866967 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-10-09 10:29:09.866973 | orchestrator | Thursday 09 October 2025 10:24:06 +0000 (0:00:01.689) 0:02:05.091 ****** 2025-10-09 10:29:09.866979 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.866985 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.866991 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.866997 | orchestrator | 2025-10-09 10:29:09.867003 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-10-09 10:29:09.867009 | orchestrator | Thursday 09 October 2025 10:24:07 +0000 (0:00:00.645) 0:02:05.736 ****** 2025-10-09 10:29:09.867015 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.867022 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.867028 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.867034 | orchestrator | 2025-10-09 10:29:09.867040 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-10-09 10:29:09.867049 | orchestrator | Thursday 09 October 2025 10:24:08 +0000 (0:00:01.270) 0:02:07.007 ****** 2025-10-09 10:29:09.867055 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.867061 | orchestrator | 2025-10-09 10:29:09.867067 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-10-09 10:29:09.867073 | orchestrator | Thursday 09 October 2025 10:24:09 +0000 (0:00:00.786) 0:02:07.793 ****** 2025-10-09 10:29:09.867082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.867089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.867123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.867146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867181 | orchestrator | 2025-10-09 10:29:09.867187 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-10-09 10:29:09.867193 | orchestrator | Thursday 09 October 2025 10:24:13 +0000 (0:00:04.384) 0:02:12.178 ****** 2025-10-09 10:29:09.867200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.867209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867247 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.867256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.867263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867286 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.867296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.867307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.867330 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.867337 | orchestrator | 2025-10-09 10:29:09.867343 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-10-09 10:29:09.867349 | orchestrator | Thursday 09 October 2025 10:24:15 +0000 (0:00:01.504) 0:02:13.683 ****** 2025-10-09 10:29:09.867355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:29:09.867362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:29:09.867368 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.867374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:29:09.867381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:29:09.867387 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.867393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:29:09.867403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:29:09.867410 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.867416 | orchestrator | 2025-10-09 10:29:09.867422 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-10-09 10:29:09.867428 | orchestrator | Thursday 09 October 2025 10:24:16 +0000 (0:00:01.018) 0:02:14.701 ****** 2025-10-09 10:29:09.867434 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.867441 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.867447 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.867453 | orchestrator | 2025-10-09 10:29:09.867459 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-10-09 10:29:09.867465 | orchestrator | Thursday 09 October 2025 10:24:17 +0000 (0:00:01.420) 0:02:16.122 ****** 2025-10-09 10:29:09.867471 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.867478 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.867484 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.867490 | orchestrator | 2025-10-09 10:29:09.867496 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-10-09 10:29:09.867502 | orchestrator | Thursday 09 October 2025 10:24:20 +0000 (0:00:02.353) 0:02:18.475 ****** 2025-10-09 10:29:09.867508 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.867518 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.867525 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.867531 | orchestrator | 2025-10-09 10:29:09.867540 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-10-09 10:29:09.867547 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:00.855) 0:02:19.331 ****** 2025-10-09 10:29:09.867553 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.867559 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.867565 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.867571 | orchestrator | 2025-10-09 10:29:09.867577 | orchestrator | TASK [include_role : designate] ************************************************ 2025-10-09 10:29:09.867583 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:00.378) 0:02:19.709 ****** 2025-10-09 10:29:09.867590 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.867596 | orchestrator | 2025-10-09 10:29:09.867602 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-10-09 10:29:09.867608 | orchestrator | Thursday 09 October 2025 10:24:22 +0000 (0:00:00.961) 0:02:20.671 ****** 2025-10-09 10:29:09.867614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:29:09.867622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:29:09.867628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:29:09.869076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:29:09.869089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:29:09.869139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:29:09.869145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869183 | orchestrator | 2025-10-09 10:29:09.869190 | orchestrator | TASK [haproxy-config : 
Add configuration for designate when using single external frontend] *** 2025-10-09 10:29:09.869196 | orchestrator | Thursday 09 October 2025 10:24:27 +0000 (0:00:05.319) 0:02:25.991 ****** 2025-10-09 10:29:09.869207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:29:09.869216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:29:09.869271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:29:09.869302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:29:09.869313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869341 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.869348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:29:09.869377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869384 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.869391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:29:09.869397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869430 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.869436 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.869442 | orchestrator | 2025-10-09 10:29:09.869449 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-10-09 10:29:09.869457 | orchestrator | Thursday 09 October 2025 10:24:28 +0000 (0:00:01.190) 0:02:27.181 ****** 2025-10-09 10:29:09.869464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-10-09 10:29:09.869471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-10-09 10:29:09.869478 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.869484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-10-09 10:29:09.869490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-10-09 10:29:09.869496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-10-09 10:29:09.869503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-10-09 10:29:09.869509 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.869515 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.869521 | orchestrator | 2025-10-09 10:29:09.869527 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-10-09 10:29:09.869534 | orchestrator | Thursday 09 October 2025 10:24:30 +0000 (0:00:01.135) 0:02:28.316 ****** 2025-10-09 10:29:09.869540 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.869546 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.869552 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.869558 | orchestrator | 2025-10-09 10:29:09.869565 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-10-09 10:29:09.869571 | orchestrator | Thursday 09 October 2025 10:24:32 +0000 (0:00:01.923) 0:02:30.240 ****** 2025-10-09 10:29:09.869583 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.869589 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.869596 | orchestrator | changed: 
[testbed-node-2] 2025-10-09 10:29:09.869602 | orchestrator | 2025-10-09 10:29:09.869608 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-10-09 10:29:09.869614 | orchestrator | Thursday 09 October 2025 10:24:33 +0000 (0:00:01.942) 0:02:32.183 ****** 2025-10-09 10:29:09.869620 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.869626 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.869632 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.869638 | orchestrator | 2025-10-09 10:29:09.869645 | orchestrator | TASK [include_role : glance] *************************************************** 2025-10-09 10:29:09.869651 | orchestrator | Thursday 09 October 2025 10:24:34 +0000 (0:00:00.616) 0:02:32.800 ****** 2025-10-09 10:29:09.869657 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.869663 | orchestrator | 2025-10-09 10:29:09.869669 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-10-09 10:29:09.869675 | orchestrator | Thursday 09 October 2025 10:24:35 +0000 (0:00:00.898) 0:02:33.698 ****** 2025-10-09 10:29:09.869690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:29:09.869698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250928', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.869711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:29:09.869720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250928', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.869733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:29:09.869742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250928', 
'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.869752 | orchestrator | 2025-10-09 10:29:09.869758 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-10-09 10:29:09.869763 | orchestrator | Thursday 09 October 2025 10:24:40 +0000 (0:00:04.856) 0:02:38.555 ****** 2025-10-09 10:29:09.869772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:29:09.869779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250928', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.869785 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.869791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:29:09.869818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250928', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.869825 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.869831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:29:09.869846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250928', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.869852 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.869857 | orchestrator | 2025-10-09 10:29:09.869863 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-10-09 10:29:09.869868 | orchestrator | Thursday 09 October 2025 10:24:44 +0000 (0:00:04.168) 0:02:42.724 ****** 2025-10-09 10:29:09.869874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-09 10:29:09.869883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-09 10:29:09.869889 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.869895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-09 10:29:09.869900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-09 10:29:09.869906 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.869912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-09 10:29:09.869921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-10-09 10:29:09.869927 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.869932 | orchestrator | 2025-10-09 10:29:09.869938 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-10-09 10:29:09.869943 | orchestrator | Thursday 09 October 2025 10:24:47 +0000 (0:00:03.376) 0:02:46.100 ****** 2025-10-09 10:29:09.869949 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.869954 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.869959 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.869965 | orchestrator | 2025-10-09 10:29:09.869970 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 
2025-10-09 10:29:09.869975 | orchestrator | Thursday 09 October 2025 10:24:49 +0000 (0:00:01.366) 0:02:47.467 ****** 2025-10-09 10:29:09.869981 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.869986 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.869991 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.869997 | orchestrator | 2025-10-09 10:29:09.870004 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-10-09 10:29:09.870010 | orchestrator | Thursday 09 October 2025 10:24:51 +0000 (0:00:02.261) 0:02:49.728 ****** 2025-10-09 10:29:09.870061 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.870069 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.870075 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.870080 | orchestrator | 2025-10-09 10:29:09.870085 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-10-09 10:29:09.870091 | orchestrator | Thursday 09 October 2025 10:24:52 +0000 (0:00:00.580) 0:02:50.309 ****** 2025-10-09 10:29:09.870096 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.870101 | orchestrator | 2025-10-09 10:29:09.870107 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-10-09 10:29:09.870112 | orchestrator | Thursday 09 October 2025 10:24:53 +0000 (0:00:00.940) 0:02:51.249 ****** 2025-10-09 10:29:09.870118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:29:09.870124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:29:09.870130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:29:09.870136 | orchestrator | 2025-10-09 10:29:09.870141 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-10-09 10:29:09.870147 | orchestrator | Thursday 09 October 2025 10:24:56 +0000 (0:00:03.401) 0:02:54.651 ****** 2025-10-09 10:29:09.870157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:29:09.870163 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.870177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:29:09.870183 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.870189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:29:09.870194 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.870200 | orchestrator | 2025-10-09 10:29:09.870205 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-10-09 10:29:09.870211 | orchestrator | Thursday 09 October 2025 10:24:57 +0000 (0:00:00.736) 0:02:55.387 ****** 2025-10-09 10:29:09.870216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-10-09 10:29:09.870222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-10-09 
10:29:09.870240 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.870245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-10-09 10:29:09.870251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-10-09 10:29:09.870256 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.870262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-10-09 10:29:09.870267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-10-09 10:29:09.870273 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.870278 | orchestrator | 2025-10-09 10:29:09.870284 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-10-09 10:29:09.870289 | orchestrator | Thursday 09 October 2025 10:24:57 +0000 (0:00:00.791) 0:02:56.179 ****** 2025-10-09 10:29:09.870294 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.870300 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.870305 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.870311 | orchestrator | 2025-10-09 10:29:09.870316 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-10-09 10:29:09.870322 | orchestrator | Thursday 09 October 2025 10:24:59 +0000 (0:00:01.534) 0:02:57.714 ****** 2025-10-09 10:29:09.870331 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.870337 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.870342 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.870348 | orchestrator | 2025-10-09 10:29:09.870353 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-10-09 10:29:09.870400 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:02.415) 0:03:00.129 ****** 2025-10-09 10:29:09.870406 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.870411 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.870417 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.870422 | orchestrator | 2025-10-09 10:29:09.870428 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-10-09 10:29:09.870433 | orchestrator | Thursday 09 October 2025 10:25:02 +0000 (0:00:00.581) 0:03:00.711 ****** 2025-10-09 10:29:09.870438 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.870444 | orchestrator | 2025-10-09 10:29:09.870449 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-10-09 10:29:09.870454 | orchestrator | Thursday 09 October 2025 10:25:03 +0000 (0:00:01.042) 0:03:01.754 ****** 2025-10-09 10:29:09.870463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:29:09.870474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:29:09.870487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:29:09.870493 | orchestrator | 2025-10-09 10:29:09.870498 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-10-09 10:29:09.870504 | orchestrator | Thursday 09 October 2025 10:25:09 +0000 (0:00:05.546) 0:03:07.301 ****** 2025-10-09 10:29:09.870517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:29:09.870529 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.870534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:29:09.870545 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.870558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:29:09.870564 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.870569 | orchestrator | 2025-10-09 10:29:09.870575 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-10-09 10:29:09.870580 | orchestrator | Thursday 09 October 2025 10:25:10 +0000 (0:00:01.087) 0:03:08.388 ****** 2025-10-09 10:29:09.870586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:29:09.870591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:29:09.870597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:29:09.870603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:29:09.870613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:29:09.870619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-10-09 10:29:09.870625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:29:09.870631 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.870656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:29:09.870662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:29:09.870668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-10-09 10:29:09.870673 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.870681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:29:09.870687 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:29:09.870692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:29:09.870698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:29:09.870703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-10-09 10:29:09.870709 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.870714 | orchestrator | 2025-10-09 10:29:09.870720 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-10-09 10:29:09.870731 | orchestrator | Thursday 09 October 2025 10:25:11 +0000 (0:00:01.249) 0:03:09.637 ****** 2025-10-09 10:29:09.870736 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.870742 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.870747 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.870752 | orchestrator | 2025-10-09 10:29:09.870758 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-10-09 10:29:09.870763 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:01.296) 0:03:10.934 ****** 2025-10-09 10:29:09.870768 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.870774 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.870779 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.870785 | orchestrator | 2025-10-09 10:29:09.870790 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-10-09 10:29:09.870796 | orchestrator | Thursday 09 October 2025 10:25:14 +0000 (0:00:01.980) 0:03:12.914 ****** 2025-10-09 10:29:09.870801 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.870806 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.870812 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.870817 | orchestrator | 2025-10-09 10:29:09.870823 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-10-09 10:29:09.870828 | orchestrator | Thursday 09 October 2025 10:25:14 +0000 (0:00:00.293) 0:03:13.208 ****** 2025-10-09 10:29:09.870833 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.870839 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.870844 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.870850 | orchestrator | 2025-10-09 10:29:09.870855 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-10-09 10:29:09.870860 | orchestrator | Thursday 
09 October 2025 10:25:15 +0000 (0:00:00.445) 0:03:13.653 ****** 2025-10-09 10:29:09.870866 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.870871 | orchestrator | 2025-10-09 10:29:09.870877 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-10-09 10:29:09.870882 | orchestrator | Thursday 09 October 2025 10:25:16 +0000 (0:00:00.947) 0:03:14.601 ****** 2025-10-09 10:29:09.870900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:29:09.870910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:29:09.870920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:29:09.870926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:29:09.870932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:29:09.870942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:29:09.870952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:29:09.870958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:29:09.870968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:29:09.870973 | orchestrator | 2025-10-09 10:29:09.870997 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-10-09 10:29:09.871011 | orchestrator | Thursday 09 October 2025 10:25:19 +0000 (0:00:03.246) 0:03:17.847 ****** 2025-10-09 10:29:09.871017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:29:09.871034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:29:09.871040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:29:09.871046 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.871054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:29:09.871064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:29:09.871070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:29:09.871075 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.871085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:29:09.871091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:29:09.871098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:29:09.871109 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.871114 | orchestrator | 2025-10-09 10:29:09.871120 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-10-09 10:29:09.871125 | orchestrator | Thursday 09 October 2025 10:25:20 +0000 (0:00:00.951) 0:03:18.798 ****** 2025-10-09 10:29:09.871131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:29:09.871136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:29:09.871142 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.871148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:29:09.871153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:29:09.871159 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.871164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:29:09.871170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:29:09.871175 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.871181 | orchestrator | 2025-10-09 
10:29:09.871186 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-10-09 10:29:09.871191 | orchestrator | Thursday 09 October 2025 10:25:21 +0000 (0:00:00.879) 0:03:19.678 ****** 2025-10-09 10:29:09.871197 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.871202 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.871207 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.871213 | orchestrator | 2025-10-09 10:29:09.871218 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-10-09 10:29:09.871224 | orchestrator | Thursday 09 October 2025 10:25:22 +0000 (0:00:01.290) 0:03:20.968 ****** 2025-10-09 10:29:09.871241 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.871247 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.871252 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.871258 | orchestrator | 2025-10-09 10:29:09.871263 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-10-09 10:29:09.871268 | orchestrator | Thursday 09 October 2025 10:25:24 +0000 (0:00:01.927) 0:03:22.895 ****** 2025-10-09 10:29:09.871274 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.871290 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.871299 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.871304 | orchestrator | 2025-10-09 10:29:09.871310 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-10-09 10:29:09.871315 | orchestrator | Thursday 09 October 2025 10:25:25 +0000 (0:00:00.489) 0:03:23.385 ****** 2025-10-09 10:29:09.871321 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.871326 | orchestrator | 2025-10-09 10:29:09.871331 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-10-09 10:29:09.871337 | orchestrator | Thursday 09 October 2025 10:25:26 +0000 (0:00:00.992) 0:03:24.378 ****** 2025-10-09 10:29:09.871345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:29:09.871351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:29:09.871371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:29:09.871392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871406 | orchestrator | 2025-10-09 10:29:09.871412 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-10-09 10:29:09.871417 | orchestrator | Thursday 09 October 2025 10:25:29 +0000 (0:00:03.214) 0:03:27.593 ****** 2025-10-09 10:29:09.871423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:29:09.871429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871434 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.871443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:29:09.871460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
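The pattern in the magnum task above (changed for magnum-api, skipping for magnum-conductor) reflects that the haproxy-config role only writes listener configuration for service entries that carry a 'haproxy' mapping; magnum-conductor exposes no API port, so it has none. A minimal Python sketch of that selection logic, illustrative only (the real role renders Jinja2 templates; the helper name below is an assumption):

    # Illustrative selection logic: only enabled services with a 'haproxy'
    # mapping produce HAProxy frontend/backend configuration.
    services = {
        "magnum-api": {"enabled": True, "haproxy": {"magnum_api": {"port": "9511"}}},
        "magnum-conductor": {"enabled": True},  # no 'haproxy' key -> skipped
    }

    def needs_haproxy_config(svc: dict) -> bool:
        return bool(svc.get("enabled")) and bool(svc.get("haproxy"))

    for name, svc in services.items():
        print(name, "changed" if needs_haproxy_config(svc) else "skipping")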
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871466 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.871474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:29:09.871480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871486 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.871491 | orchestrator | 2025-10-09 10:29:09.871496 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-10-09 10:29:09.871502 | orchestrator | Thursday 09 October 2025 10:25:30 +0000 (0:00:01.133) 0:03:28.727 ****** 2025-10-09 10:29:09.871508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-10-09 10:29:09.871513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-10-09 10:29:09.871519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-10-09 10:29:09.871527 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.871533 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-10-09 10:29:09.871538 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.871544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-10-09 10:29:09.871549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-10-09 10:29:09.871554 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.871560 | orchestrator | 2025-10-09 10:29:09.871576 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-10-09 10:29:09.871582 | orchestrator | Thursday 09 October 2025 10:25:31 +0000 (0:00:00.982) 0:03:29.709 ****** 2025-10-09 10:29:09.871587 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.871593 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.871598 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.871604 | orchestrator | 2025-10-09 10:29:09.871609 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-10-09 10:29:09.871615 | orchestrator | Thursday 09 October 2025 10:25:32 +0000 (0:00:01.313) 0:03:31.023 ****** 2025-10-09 10:29:09.871620 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.871625 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.871631 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.871636 | orchestrator | 2025-10-09 10:29:09.871641 | orchestrator | TASK [include_role : manila] *************************************************** 2025-10-09 10:29:09.871647 | orchestrator | Thursday 09 October 2025 10:25:35 +0000 (0:00:02.297) 0:03:33.321 ****** 2025-10-09 10:29:09.871652 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.871657 | orchestrator | 2025-10-09 10:29:09.871663 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-10-09 10:29:09.871668 | orchestrator | Thursday 09 October 2025 10:25:36 +0000 (0:00:01.676) 0:03:34.997 ****** 2025-10-09 10:29:09.871676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-10-09 10:29:09.871682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
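The "Copying over ... ProxySQL users config" and "... rules config" tasks above stage per-service fragments that ProxySQL later loads into its admin tables; the file contents are not shown in this log, so the following is only a conceptual sketch using column names from ProxySQL's mysql_users and mysql_query_rules schema, not the exact format kolla-ansible writes:

    # Conceptual sketch of what a per-service ProxySQL user and routing rule
    # amount to; all values here are placeholders, not taken from this run.
    magnum_user = {
        "username": "magnum",                  # service database account
        "password": "<from the kolla passwords file>",
        "default_hostgroup": 0,                # writer hostgroup
        "active": 1,
    }
    magnum_rule = {
        "schemaname": "magnum",                # route queries for this schema
        "destination_hostgroup": 0,
        "apply": 1,
    }
    print(magnum_user, magnum_rule)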
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-10-09 10:29:09.871722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}}}}) 2025-10-09 10:29:09.871728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871769 | orchestrator | 2025-10-09 10:29:09.871777 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-10-09 10:29:09.871782 | orchestrator | Thursday 09 October 2025 10:25:40 +0000 (0:00:03.830) 0:03:38.828 ****** 2025-10-09 10:29:09.871788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-10-09 10:29:09.871797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871813 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.871830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-10-09 10:29:09.871838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871858 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.871864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
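The healthcheck blocks repeated in these items (interval, retries, start_period, timeout, plus a CMD-SHELL test such as healthcheck_curl or healthcheck_port) define the container healthchecks shipped in the kolla images. The real helpers are shell scripts inside those images; a rough Python equivalent of the healthcheck_curl case, for illustration only:

    import time
    import urllib.error
    import urllib.request

    def probe(url: str, retries: int = 3, timeout: int = 30, interval: int = 30) -> bool:
        """Roughly what a healthcheck_curl-style probe verifies: the endpoint answers."""
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status < 500
            except (urllib.error.URLError, OSError):
                if attempt + 1 < retries:
                    time.sleep(interval)
        return False

    # Endpoint taken from the manila-api item above; short timeout for the example.
    print(probe("http://192.168.16.10:8786", retries=1, timeout=5))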
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-10-09 10:29:09.871873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.871901 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.871906 | orchestrator | 2025-10-09 10:29:09.871912 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-10-09 10:29:09.871917 | orchestrator | Thursday 09 October 2025 10:25:41 +0000 (0:00:00.743) 0:03:39.572 ****** 2025-10-09 10:29:09.871926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-10-09 10:29:09.871931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-10-09 10:29:09.871937 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.871942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8786', 'listen_port': '8786'}})  2025-10-09 10:29:09.871948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-10-09 10:29:09.871953 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.871959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-10-09 10:29:09.871964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-10-09 10:29:09.871969 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.871975 | orchestrator | 2025-10-09 10:29:09.871980 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-10-09 10:29:09.871986 | orchestrator | Thursday 09 October 2025 10:25:42 +0000 (0:00:01.265) 0:03:40.837 ****** 2025-10-09 10:29:09.871991 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.871996 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.872001 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.872007 | orchestrator | 2025-10-09 10:29:09.872012 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-10-09 10:29:09.872017 | orchestrator | Thursday 09 October 2025 10:25:43 +0000 (0:00:01.368) 0:03:42.205 ****** 2025-10-09 10:29:09.872023 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.872028 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.872033 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.872039 | orchestrator | 2025-10-09 10:29:09.872044 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-10-09 10:29:09.872049 | orchestrator | Thursday 09 October 2025 10:25:46 +0000 (0:00:02.182) 0:03:44.388 ****** 2025-10-09 10:29:09.872055 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.872060 | orchestrator | 2025-10-09 10:29:09.872065 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-10-09 10:29:09.872071 | orchestrator | Thursday 09 October 2025 10:25:47 +0000 (0:00:01.454) 0:03:45.842 ****** 2025-10-09 10:29:09.872076 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:29:09.872081 | orchestrator | 2025-10-09 10:29:09.872087 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-10-09 10:29:09.872092 | orchestrator | Thursday 09 October 2025 10:25:50 +0000 (0:00:03.018) 0:03:48.861 ****** 2025-10-09 10:29:09.872113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:29:09.872123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:29:09.872128 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:29:09.872144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:29:09.872153 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:29:09.872177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:29:09.872183 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872188 | orchestrator | 2025-10-09 10:29:09.872194 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-10-09 10:29:09.872199 | orchestrator | Thursday 09 October 2025 10:25:52 +0000 (0:00:02.216) 0:03:51.077 ****** 2025-10-09 10:29:09.872238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:29:09.872249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:29:09.872254 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:29:09.872270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:29:09.872278 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:29:09.872299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:29:09.872305 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872310 | orchestrator | 2025-10-09 10:29:09.872316 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-10-09 10:29:09.872321 | orchestrator | Thursday 09 October 2025 10:25:55 +0000 (0:00:02.575) 0:03:53.652 ****** 2025-10-09 10:29:09.872338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:29:09.872347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:29:09.872356 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:29:09.872369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:29:09.872375 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:29:09.872386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:29:09.872392 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872397 | orchestrator | 2025-10-09 10:29:09.872402 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-10-09 10:29:09.872408 | orchestrator | Thursday 09 October 2025 10:25:58 +0000 (0:00:02.898) 0:03:56.551 ****** 2025-10-09 10:29:09.872413 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.872419 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.872424 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.872429 | orchestrator | 2025-10-09 10:29:09.872434 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-10-09 10:29:09.872440 | orchestrator | Thursday 09 October 2025 10:26:00 +0000 (0:00:01.789) 0:03:58.340 ****** 2025-10-09 10:29:09.872445 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872451 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872456 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872461 | orchestrator | 2025-10-09 10:29:09.872467 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-10-09 10:29:09.872472 | orchestrator | Thursday 09 October 2025 10:26:01 +0000 (0:00:01.692) 0:04:00.033 ****** 2025-10-09 10:29:09.872478 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872487 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872492 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872497 | orchestrator | 2025-10-09 10:29:09.872503 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-10-09 10:29:09.872508 | orchestrator | Thursday 09 October 2025 10:26:02 +0000 
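For MariaDB the HAProxy listener tasks are skipped on every node while the ProxySQL users config is written, which is consistent with ProxySQL fronting the Galera cluster in this deployment (an inference from this output, not stated explicitly in the log). The custom_member_list values above still describe the intended active/backup ordering, which a few lines of Python can recover:

    # Member strings copied from the mariadb haproxy item above.
    members = [
        " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
    ]
    writer = [m.split()[1] for m in members if not m.rstrip().endswith("backup")]
    standby = [m.split()[1] for m in members if m.rstrip().endswith("backup")]
    print("writer:", writer)    # ['testbed-node-0']
    print("standby:", standby)  # ['testbed-node-1', 'testbed-node-2']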
(0:00:00.338) 0:04:00.371 ****** 2025-10-09 10:29:09.872514 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.872519 | orchestrator | 2025-10-09 10:29:09.872524 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-10-09 10:29:09.872530 | orchestrator | Thursday 09 October 2025 10:26:03 +0000 (0:00:01.455) 0:04:01.826 ****** 2025-10-09 10:29:09.872540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20250928', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-10-09 10:29:09.872547 | orchestrator | 2025-10-09 10:29:09 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:09.872552 | orchestrator | 2025-10-09 10:29:09 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:09.872560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20250928', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-10-09 10:29:09.872567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20250928', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-10-09 10:29:09.872572 | orchestrator | 2025-10-09 10:29:09.872578 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-10-09 10:29:09.872583 | orchestrator | Thursday 09 October 2025 10:26:05 +0000 (0:00:01.499) 0:04:03.326 ****** 2025-10-09 10:29:09.872589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image':
'registry.osism.tech/kolla/release/memcached:1.6.24.20250928', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-09 10:29:09.872598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20250928', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-09 10:29:09.872604 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872609 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20250928', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-09 10:29:09.872625 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872631 | orchestrator | 2025-10-09 10:29:09.872636 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-10-09 10:29:09.872641 | orchestrator | Thursday 09 October 2025 10:26:05 +0000 (0:00:00.469) 0:04:03.795 ****** 2025-10-09 10:29:09.872649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-09 10:29:09.872655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-09 10:29:09.872661 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872666 | orchestrator | skipping: [testbed-node-1] 2025-10-09 
10:29:09.872671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-09 10:29:09.872677 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872682 | orchestrator | 2025-10-09 10:29:09.872688 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-10-09 10:29:09.872693 | orchestrator | Thursday 09 October 2025 10:26:06 +0000 (0:00:00.971) 0:04:04.767 ****** 2025-10-09 10:29:09.872698 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872704 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872709 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872714 | orchestrator | 2025-10-09 10:29:09.872720 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-10-09 10:29:09.872728 | orchestrator | Thursday 09 October 2025 10:26:07 +0000 (0:00:00.487) 0:04:05.255 ****** 2025-10-09 10:29:09.872734 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872739 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872744 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872750 | orchestrator | 2025-10-09 10:29:09.872755 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-10-09 10:29:09.872760 | orchestrator | Thursday 09 October 2025 10:26:08 +0000 (0:00:01.475) 0:04:06.730 ****** 2025-10-09 10:29:09.872766 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.872771 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.872776 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.872782 | orchestrator | 2025-10-09 10:29:09.872787 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-10-09 10:29:09.872792 | orchestrator | Thursday 09 October 2025 10:26:08 +0000 (0:00:00.337) 0:04:07.067 ****** 2025-10-09 10:29:09.872798 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.872803 | orchestrator | 2025-10-09 10:29:09.872809 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-10-09 10:29:09.872814 | orchestrator | Thursday 09 October 2025 10:26:10 +0000 (0:00:01.530) 0:04:08.598 ****** 2025-10-09 10:29:09.872819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:29:09.872836 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250928', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:29:09.872865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250928', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:29:09.872875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250928', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250928', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.872905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.872911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:29:09.872941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.872950 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.872962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250928', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.872971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:29:09.872977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.872984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.872995 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250928', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.873024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250928', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 
'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.873068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:29:09.873085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250928', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:29:09.873116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250928', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250928', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.873191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873196 | orchestrator | 2025-10-09 10:29:09.873202 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-10-09 10:29:09.873207 | orchestrator | Thursday 09 October 2025 10:26:14 +0000 (0:00:04.491) 0:04:13.090 ****** 2025-10-09 10:29:09.873217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:29:09.873278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250928', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:29:09.873307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250928', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:29:09.873352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250928', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}}})  2025-10-09 10:29:09.873387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250928', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:29:09.873415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.873421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873439 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.873445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250928', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:29:09.873477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250928', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873509 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:29:09.873535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250928', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.873561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250928', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873581 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.873587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250928', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:29:09.873621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250928', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.873641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:29:09.873658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250928', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:29:09.873664 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.873670 | orchestrator | 2025-10-09 10:29:09.873675 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-10-09 10:29:09.873681 | orchestrator | Thursday 09 October 2025 10:26:16 +0000 (0:00:01.670) 0:04:14.760 ****** 2025-10-09 10:29:09.873686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-10-09 10:29:09.873691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-10-09 10:29:09.873698 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.873703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-10-09 10:29:09.873708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-10-09 10:29:09.873713 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.873718 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-10-09 10:29:09.873723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-10-09 10:29:09.873727 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.873732 | orchestrator | 2025-10-09 10:29:09.873737 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-10-09 10:29:09.873742 | orchestrator | Thursday 09 October 2025 10:26:18 +0000 (0:00:02.139) 0:04:16.900 ****** 2025-10-09 10:29:09.873747 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.873751 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.873756 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.873761 | orchestrator | 2025-10-09 10:29:09.873769 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-10-09 10:29:09.873774 | orchestrator | Thursday 09 October 2025 10:26:20 +0000 (0:00:01.343) 0:04:18.243 ****** 2025-10-09 10:29:09.873778 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.873790 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.873795 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.873800 | orchestrator | 2025-10-09 10:29:09.873805 | orchestrator | TASK [include_role : placement] ************************************************ 2025-10-09 10:29:09.873809 | orchestrator | Thursday 09 October 2025 10:26:22 +0000 (0:00:02.326) 0:04:20.570 ****** 2025-10-09 10:29:09.873814 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.873819 | orchestrator | 2025-10-09 10:29:09.873824 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-10-09 10:29:09.873828 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:01.325) 0:04:21.895 ****** 2025-10-09 10:29:09.873834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.873849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.873856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.873861 | orchestrator | 2025-10-09 10:29:09.873866 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-10-09 10:29:09.873878 | orchestrator | Thursday 09 October 2025 10:26:27 +0000 (0:00:03.967) 0:04:25.863 ****** 2025-10-09 10:29:09.873887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.873892 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.873897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.873902 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.873917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.873922 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.873927 | orchestrator | 2025-10-09 10:29:09.873932 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-10-09 10:29:09.873936 | orchestrator | Thursday 09 October 2025 10:26:28 +0000 (0:00:00.597) 0:04:26.461 ****** 2025-10-09 10:29:09.873941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:29:09.873949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:29:09.873955 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.873959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:29:09.873968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:29:09.873973 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.873978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:29:09.873983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:29:09.873988 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.873992 | orchestrator | 2025-10-09 10:29:09.873997 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-10-09 10:29:09.874002 | orchestrator | Thursday 09 
October 2025 10:26:29 +0000 (0:00:00.849) 0:04:27.311 ****** 2025-10-09 10:29:09.874007 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.874011 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.874036 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.874041 | orchestrator | 2025-10-09 10:29:09.874046 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-10-09 10:29:09.874051 | orchestrator | Thursday 09 October 2025 10:26:30 +0000 (0:00:01.351) 0:04:28.663 ****** 2025-10-09 10:29:09.874056 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.874060 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.874065 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.874070 | orchestrator | 2025-10-09 10:29:09.874075 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-10-09 10:29:09.874079 | orchestrator | Thursday 09 October 2025 10:26:32 +0000 (0:00:02.221) 0:04:30.885 ****** 2025-10-09 10:29:09.874084 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.874089 | orchestrator | 2025-10-09 10:29:09.874094 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-10-09 10:29:09.874099 | orchestrator | Thursday 09 October 2025 10:26:34 +0000 (0:00:01.563) 0:04:32.448 ****** 2025-10-09 10:29:09.874113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.874119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.874145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.874177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874187 | orchestrator | 2025-10-09 10:29:09.874192 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-10-09 10:29:09.874197 | orchestrator | Thursday 09 October 2025 10:26:38 +0000 (0:00:04.720) 0:04:37.169 ****** 2025-10-09 10:29:09.874202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.874217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874242 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.874250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.874256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874266 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.874276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.874287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.874297 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.874302 | orchestrator | 2025-10-09 10:29:09.874307 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-10-09 10:29:09.874311 | orchestrator | Thursday 09 October 2025 10:26:40 +0000 (0:00:01.407) 0:04:38.577 ****** 2025-10-09 10:29:09.874317 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874337 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.874342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874365 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.874370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:29:09.874400 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.874404 | orchestrator | 2025-10-09 10:29:09.874409 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-10-09 10:29:09.874414 | orchestrator | Thursday 09 October 2025 10:26:41 +0000 (0:00:00.996) 0:04:39.573 ****** 2025-10-09 10:29:09.874419 | orchestrator | changed: [testbed-node-0] 
2025-10-09 10:29:09.874424 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.874428 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.874433 | orchestrator | 2025-10-09 10:29:09.874438 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-10-09 10:29:09.874445 | orchestrator | Thursday 09 October 2025 10:26:42 +0000 (0:00:01.533) 0:04:41.107 ****** 2025-10-09 10:29:09.874450 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.874455 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.874460 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.874464 | orchestrator | 2025-10-09 10:29:09.874469 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-10-09 10:29:09.874474 | orchestrator | Thursday 09 October 2025 10:26:45 +0000 (0:00:02.220) 0:04:43.327 ****** 2025-10-09 10:29:09.874479 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.874484 | orchestrator | 2025-10-09 10:29:09.874489 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-10-09 10:29:09.874493 | orchestrator | Thursday 09 October 2025 10:26:46 +0000 (0:00:01.726) 0:04:45.053 ****** 2025-10-09 10:29:09.874498 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-10-09 10:29:09.874503 | orchestrator | 2025-10-09 10:29:09.874508 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-10-09 10:29:09.874513 | orchestrator | Thursday 09 October 2025 10:26:47 +0000 (0:00:00.875) 0:04:45.929 ****** 2025-10-09 10:29:09.874518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-10-09 10:29:09.874523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-10-09 10:29:09.874532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-10-09 10:29:09.874537 | orchestrator | 2025-10-09 10:29:09.874542 | orchestrator | TASK [haproxy-config : Add configuration for 
nova-cell:nova-novncproxy when using single external frontend] *** 2025-10-09 10:29:09.874547 | orchestrator | Thursday 09 October 2025 10:26:52 +0000 (0:00:04.740) 0:04:50.670 ****** 2025-10-09 10:29:09.874561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874566 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.874571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874576 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.874581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874586 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.874591 | orchestrator | 2025-10-09 10:29:09.874596 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-10-09 10:29:09.874639 | orchestrator | Thursday 09 October 2025 10:26:54 +0000 (0:00:01.575) 0:04:52.246 ****** 2025-10-09 10:29:09.874651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-09 10:29:09.874657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-09 10:29:09.874662 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.874667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-09 10:29:09.874672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-09 10:29:09.874681 | orchestrator | 
skipping: [testbed-node-1] 2025-10-09 10:29:09.874685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-09 10:29:09.874691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-10-09 10:29:09.874696 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.874700 | orchestrator | 2025-10-09 10:29:09.874705 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-10-09 10:29:09.874710 | orchestrator | Thursday 09 October 2025 10:26:55 +0000 (0:00:01.805) 0:04:54.052 ****** 2025-10-09 10:29:09.874715 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.874720 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.874724 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.874729 | orchestrator | 2025-10-09 10:29:09.874734 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-10-09 10:29:09.874739 | orchestrator | Thursday 09 October 2025 10:26:58 +0000 (0:00:02.545) 0:04:56.597 ****** 2025-10-09 10:29:09.874743 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.874748 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.874753 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.874758 | orchestrator | 2025-10-09 10:29:09.874762 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-10-09 10:29:09.874767 | orchestrator | Thursday 09 October 2025 10:27:01 +0000 (0:00:03.090) 0:04:59.688 ****** 2025-10-09 10:29:09.874772 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-10-09 10:29:09.874777 | orchestrator | 2025-10-09 10:29:09.874782 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-10-09 10:29:09.874787 | orchestrator | Thursday 09 October 2025 10:27:02 +0000 (0:00:01.509) 0:05:01.198 ****** 2025-10-09 10:29:09.874802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874808 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.874813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874818 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.874825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874834 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.874839 | orchestrator | 2025-10-09 10:29:09.874843 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-10-09 10:29:09.874848 | orchestrator | Thursday 09 October 2025 10:27:04 +0000 (0:00:01.305) 0:05:02.503 ****** 2025-10-09 10:29:09.874853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874858 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.874863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874868 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.874873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-10-09 10:29:09.874878 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.874883 | orchestrator | 2025-10-09 10:29:09.874887 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-10-09 10:29:09.874892 | orchestrator | Thursday 09 October 2025 10:27:05 +0000 (0:00:01.406) 0:05:03.909 ****** 2025-10-09 10:29:09.874897 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.874902 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.874906 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.874911 | orchestrator 
| 2025-10-09 10:29:09.874916 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-10-09 10:29:09.874921 | orchestrator | Thursday 09 October 2025 10:27:07 +0000 (0:00:02.049) 0:05:05.959 ****** 2025-10-09 10:29:09.874925 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.874930 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.874935 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.874940 | orchestrator | 2025-10-09 10:29:09.874944 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-10-09 10:29:09.874958 | orchestrator | Thursday 09 October 2025 10:27:10 +0000 (0:00:02.466) 0:05:08.426 ****** 2025-10-09 10:29:09.874964 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.874968 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.874973 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.874978 | orchestrator | 2025-10-09 10:29:09.874983 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-10-09 10:29:09.874987 | orchestrator | Thursday 09 October 2025 10:27:13 +0000 (0:00:03.029) 0:05:11.455 ****** 2025-10-09 10:29:09.874992 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-10-09 10:29:09.875001 | orchestrator | 2025-10-09 10:29:09.875005 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-10-09 10:29:09.875010 | orchestrator | Thursday 09 October 2025 10:27:14 +0000 (0:00:00.920) 0:05:12.375 ****** 2025-10-09 10:29:09.875018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-09 10:29:09.875023 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-09 10:29:09.875033 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-09 10:29:09.875043 
| orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875047 | orchestrator | 2025-10-09 10:29:09.875052 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-10-09 10:29:09.875057 | orchestrator | Thursday 09 October 2025 10:27:15 +0000 (0:00:01.400) 0:05:13.776 ****** 2025-10-09 10:29:09.875062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-09 10:29:09.875067 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-09 10:29:09.875077 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-10-09 10:29:09.875101 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875106 | orchestrator | 2025-10-09 10:29:09.875111 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-10-09 10:29:09.875115 | orchestrator | Thursday 09 October 2025 10:27:16 +0000 (0:00:01.406) 0:05:15.182 ****** 2025-10-09 10:29:09.875120 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875125 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875129 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875134 | orchestrator | 2025-10-09 10:29:09.875139 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-10-09 10:29:09.875144 | orchestrator | Thursday 09 October 2025 10:27:18 +0000 (0:00:01.649) 0:05:16.832 ****** 2025-10-09 10:29:09.875148 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.875153 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.875158 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.875163 | orchestrator | 2025-10-09 10:29:09.875167 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-10-09 10:29:09.875172 | orchestrator | Thursday 09 October 2025 10:27:21 +0000 (0:00:02.547) 0:05:19.379 ****** 2025-10-09 
10:29:09.875177 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.875182 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.875187 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.875191 | orchestrator | 2025-10-09 10:29:09.875198 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-10-09 10:29:09.875203 | orchestrator | Thursday 09 October 2025 10:27:24 +0000 (0:00:03.432) 0:05:22.812 ****** 2025-10-09 10:29:09.875208 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.875213 | orchestrator | 2025-10-09 10:29:09.875217 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-10-09 10:29:09.875222 | orchestrator | Thursday 09 October 2025 10:27:26 +0000 (0:00:01.753) 0:05:24.565 ****** 2025-10-09 10:29:09.875239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.875245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:29:09.875250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.875287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.875292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:29:09.875297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.875325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.875332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:29:09.875338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.875356 | orchestrator | 2025-10-09 10:29:09.875360 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-10-09 10:29:09.875365 | orchestrator | Thursday 09 October 2025 10:27:30 +0000 (0:00:03.768) 0:05:28.334 ****** 2025-10-09 10:29:09.875380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.875385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:29:09.875393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.875408 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.875432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:29:09.875437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.875455 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.875470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:29:09.875475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:29:09.875497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:29:09.875502 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875507 | orchestrator | 2025-10-09 10:29:09.875512 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-10-09 10:29:09.875517 | orchestrator | Thursday 09 October 2025 10:27:30 +0000 (0:00:00.785) 0:05:29.120 ****** 2025-10-09 10:29:09.875522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-10-09 10:29:09.875527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-10-09 10:29:09.875532 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-10-09 10:29:09.875541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-10-09 10:29:09.875550 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-10-09 10:29:09.875559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-10-09 10:29:09.875564 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875569 | orchestrator | 2025-10-09 10:29:09.875574 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-10-09 10:29:09.875578 | orchestrator | Thursday 09 October 2025 10:27:32 +0000 (0:00:01.743) 0:05:30.864 ****** 2025-10-09 10:29:09.875583 | orchestrator | changed: [testbed-node-0] 
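
The haproxy-config tasks above loop over service definitions shaped like the (item={...}) dicts printed in this log and skip any entry whose enabled flag is falsy, which is why the disabled nova proxy listeners are skipped on every node while octavia-api is templated. A minimal Python sketch of that selection, assuming it reduces to the enabled flags visible in the items themselves (the real role implements this in kolla-ansible's Jinja templates; the helper names below are illustrative only):

    # Illustrative sketch only, not the kolla-ansible implementation.
    # Reproduces the skip/changed pattern seen in the task output above for
    # one service dict; True/'yes' normalisation mirrors how the flags appear
    # in the logged items.

    def enabled(flag):
        """Treat True/'yes'/'true'/'1' as enabled, anything else as disabled."""
        if isinstance(flag, bool):
            return flag
        return str(flag).strip().lower() in ("yes", "true", "1")

    def listeners_to_render(service):
        """Return the haproxy listener entries that would actually be templated."""
        if not enabled(service.get("enabled")):
            return {}  # whole service disabled -> task skipped on all nodes
        return {
            name: cfg
            for name, cfg in service.get("haproxy", {}).items()
            if enabled(cfg.get("enabled"))  # per-listener flag, e.g. octavia_api
        }

    # Example using the octavia-api item from the output above (trimmed):
    octavia_api = {
        "enabled": True,
        "haproxy": {
            "octavia_api": {"enabled": "yes", "external": False, "port": "9876"},
            "octavia_api_external": {"enabled": "yes", "external": True,
                                     "external_fqdn": "api.testbed.osism.xyz",
                                     "port": "9876"},
        },
    }
    print(sorted(listeners_to_render(octavia_api)))
    # -> ['octavia_api', 'octavia_api_external']

With 'enabled': False at the service level, as for nova-spicehtml5proxy and nova-serialproxy earlier, the sketch returns an empty dict, matching the skipped items in the log.
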
2025-10-09 10:29:09.875588 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.875593 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.875597 | orchestrator | 2025-10-09 10:29:09.875602 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-10-09 10:29:09.875607 | orchestrator | Thursday 09 October 2025 10:27:34 +0000 (0:00:01.483) 0:05:32.347 ****** 2025-10-09 10:29:09.875612 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.875617 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.875621 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.875626 | orchestrator | 2025-10-09 10:29:09.875631 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-10-09 10:29:09.875635 | orchestrator | Thursday 09 October 2025 10:27:36 +0000 (0:00:02.271) 0:05:34.619 ****** 2025-10-09 10:29:09.875640 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.875645 | orchestrator | 2025-10-09 10:29:09.875650 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-10-09 10:29:09.875655 | orchestrator | Thursday 09 October 2025 10:27:37 +0000 (0:00:01.505) 0:05:36.124 ****** 2025-10-09 10:29:09.875669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:29:09.875677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:29:09.875686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:29:09.875691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:29:09.875704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:29:09.875713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:29:09.875721 | orchestrator | 2025-10-09 10:29:09.875726 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-10-09 10:29:09.875731 | orchestrator | Thursday 09 October 2025 10:27:43 +0000 (0:00:05.659) 0:05:41.784 ****** 2025-10-09 10:29:09.875736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:29:09.875742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:29:09.875747 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:29:09.875769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:29:09.875778 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:29:09.875789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:29:09.875794 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875799 | orchestrator | 2025-10-09 10:29:09.875804 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-10-09 10:29:09.875809 | orchestrator | Thursday 09 October 2025 10:27:44 +0000 (0:00:00.697) 0:05:42.482 ****** 2025-10-09 10:29:09.875814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-09 10:29:09.875819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:29:09.875833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:29:09.875838 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-09 10:29:09.875848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:29:09.875853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:29:09.875861 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-09 10:29:09.875873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:29:09.875878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:29:09.875883 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875888 | orchestrator | 2025-10-09 10:29:09.875893 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-10-09 10:29:09.875898 | orchestrator | Thursday 09 October 2025 10:27:45 +0000 (0:00:01.042) 0:05:43.525 ****** 2025-10-09 10:29:09.875903 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875907 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875912 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875917 | orchestrator | 2025-10-09 10:29:09.875922 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-10-09 10:29:09.875927 | orchestrator | Thursday 09 October 2025 10:27:46 +0000 (0:00:00.914) 0:05:44.439 ****** 2025-10-09 10:29:09.875931 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.875936 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.875941 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.875946 | orchestrator | 2025-10-09 
10:29:09.875950 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-10-09 10:29:09.875955 | orchestrator | Thursday 09 October 2025 10:27:47 +0000 (0:00:01.465) 0:05:45.905 ****** 2025-10-09 10:29:09.875960 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.875965 | orchestrator | 2025-10-09 10:29:09.875970 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-10-09 10:29:09.875975 | orchestrator | Thursday 09 October 2025 10:27:49 +0000 (0:00:01.483) 0:05:47.388 ****** 2025-10-09 10:29:09.875980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:29:09.875985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:29:09.875999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:29:09.876021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:29:09.876031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:29:09.876061 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:29:09.876073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:29:09.876102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:29:09.876110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:29:09.876138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:29:09.876143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:29:09.876166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:29:09.876178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876196 | orchestrator | 2025-10-09 10:29:09.876200 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-10-09 10:29:09.876205 | orchestrator | Thursday 09 October 2025 10:27:53 +0000 (0:00:04.576) 0:05:51.965 ****** 2025-10-09 10:29:09.876210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:29:09.876215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:29:09.876221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:29:09.876283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:29:09.876288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876306 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:29:09.876320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:29:09.876327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}})  2025-10-09 10:29:09.876356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:29:09.876362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876379 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:29:09.876392 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:29:09.876398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:29:09.876426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:29:09.876435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:29:09.876445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:29:09.876453 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876458 | orchestrator | 2025-10-09 10:29:09.876463 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-10-09 10:29:09.876468 | orchestrator | Thursday 09 October 2025 10:27:55 +0000 (0:00:01.349) 0:05:53.314 ****** 2025-10-09 10:29:09.876473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-10-09 10:29:09.876478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-10-09 10:29:09.876483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:29:09.876491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:29:09.876496 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-10-09 10:29:09.876506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-10-09 10:29:09.876511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:29:09.876520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:29:09.876525 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-10-09 10:29:09.876535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-10-09 10:29:09.876540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:29:09.876545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:29:09.876550 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876554 | orchestrator | 2025-10-09 10:29:09.876559 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-10-09 10:29:09.876564 | orchestrator | Thursday 09 October 2025 10:27:56 +0000 (0:00:01.079) 0:05:54.394 ****** 2025-10-09 10:29:09.876569 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876574 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876578 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876583 | orchestrator | 2025-10-09 10:29:09.876588 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-10-09 10:29:09.876593 | orchestrator | Thursday 09 October 2025 10:27:56 
+0000 (0:00:00.495) 0:05:54.890 ****** 2025-10-09 10:29:09.876598 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876603 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876607 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876612 | orchestrator | 2025-10-09 10:29:09.876617 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-10-09 10:29:09.876625 | orchestrator | Thursday 09 October 2025 10:27:58 +0000 (0:00:01.536) 0:05:56.426 ****** 2025-10-09 10:29:09.876629 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.876634 | orchestrator | 2025-10-09 10:29:09.876639 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-10-09 10:29:09.876644 | orchestrator | Thursday 09 October 2025 10:28:00 +0000 (0:00:01.999) 0:05:58.425 ****** 2025-10-09 10:29:09.876651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:29:09.876661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:29:09.876666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:29:09.876671 | orchestrator | 2025-10-09 10:29:09.876676 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-10-09 10:29:09.876681 | orchestrator | Thursday 09 October 2025 10:28:02 +0000 (0:00:02.679) 0:06:01.105 ****** 2025-10-09 10:29:09.876689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-10-09 10:29:09.876697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-10-09 10:29:09.876706 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876711 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-10-09 10:29:09.876721 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876726 | orchestrator | 2025-10-09 10:29:09.876730 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-10-09 10:29:09.876735 | orchestrator | Thursday 09 October 2025 10:28:03 +0000 (0:00:00.469) 0:06:01.575 ****** 2025-10-09 10:29:09.876740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-10-09 10:29:09.876745 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-10-09 10:29:09.876754 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-10-09 10:29:09.876764 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876769 | orchestrator | 2025-10-09 10:29:09.876773 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-10-09 10:29:09.876778 | orchestrator | Thursday 09 October 2025 10:28:04 +0000 (0:00:01.094) 0:06:02.669 ****** 2025-10-09 10:29:09.876783 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876787 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876792 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876797 | orchestrator | 2025-10-09 10:29:09.876802 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-10-09 10:29:09.876806 | orchestrator | Thursday 09 October 2025 10:28:05 +0000 (0:00:00.569) 0:06:03.239 ****** 2025-10-09 10:29:09.876811 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876816 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876820 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876825 | orchestrator | 2025-10-09 10:29:09.876830 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-10-09 10:29:09.876835 | orchestrator | Thursday 09 October 2025 10:28:06 +0000 (0:00:01.501) 0:06:04.740 ****** 2025-10-09 10:29:09.876839 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:29:09.876844 | orchestrator | 2025-10-09 10:29:09.876852 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-10-09 10:29:09.876860 | orchestrator | Thursday 09 October 2025 10:28:08 +0000 (0:00:01.909) 0:06:06.650 ****** 2025-10-09 10:29:09.876865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.876873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.876878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.876883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.876895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.876903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-09 10:29:09.876908 | orchestrator | 2025-10-09 10:29:09.876913 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-10-09 10:29:09.876918 | orchestrator | Thursday 09 October 2025 10:28:15 +0000 (0:00:06.830) 0:06:13.480 ****** 2025-10-09 10:29:09.876923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.876928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.876937 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.876945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.876953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.876958 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.876963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.876968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250928', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-09 10:29:09.876973 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.876981 | orchestrator | 2025-10-09 10:29:09.876986 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-10-09 10:29:09.876991 | orchestrator | Thursday 09 October 2025 10:28:16 +0000 (0:00:00.802) 0:06:14.283 ****** 2025-10-09 10:29:09.876995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877018 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877041 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877046 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:29:09.877071 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877076 | orchestrator | 2025-10-09 10:29:09.877080 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-10-09 10:29:09.877085 | orchestrator | Thursday 09 October 2025 10:28:17 +0000 (0:00:01.775) 0:06:16.058 ****** 2025-10-09 10:29:09.877090 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.877095 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.877099 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.877104 | orchestrator | 2025-10-09 10:29:09.877109 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-10-09 10:29:09.877117 | orchestrator | Thursday 09 October 2025 10:28:19 +0000 (0:00:01.314) 0:06:17.373 ****** 2025-10-09 10:29:09.877122 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.877127 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.877131 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.877136 | orchestrator | 2025-10-09 10:29:09.877141 | orchestrator | TASK [include_role : swift] **************************************************** 2025-10-09 10:29:09.877146 | orchestrator | Thursday 09 October 2025 10:28:21 +0000 (0:00:02.189) 0:06:19.562 ****** 2025-10-09 10:29:09.877150 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877155 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877160 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877165 | orchestrator | 2025-10-09 10:29:09.877170 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-10-09 10:29:09.877174 | orchestrator | Thursday 09 October 2025 10:28:21 +0000 (0:00:00.351) 0:06:19.914 ****** 2025-10-09 10:29:09.877179 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877184 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877189 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877193 | orchestrator | 2025-10-09 10:29:09.877198 | orchestrator | TASK [include_role : trove] **************************************************** 
2025-10-09 10:29:09.877203 | orchestrator | Thursday 09 October 2025 10:28:22 +0000 (0:00:00.399) 0:06:20.313 ****** 2025-10-09 10:29:09.877208 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877213 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877217 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877222 | orchestrator | 2025-10-09 10:29:09.877261 | orchestrator | TASK [include_role : venus] **************************************************** 2025-10-09 10:29:09.877266 | orchestrator | Thursday 09 October 2025 10:28:22 +0000 (0:00:00.671) 0:06:20.985 ****** 2025-10-09 10:29:09.877271 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877275 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877280 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877285 | orchestrator | 2025-10-09 10:29:09.877290 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-10-09 10:29:09.877298 | orchestrator | Thursday 09 October 2025 10:28:23 +0000 (0:00:00.369) 0:06:21.355 ****** 2025-10-09 10:29:09.877303 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877307 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877312 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877317 | orchestrator | 2025-10-09 10:29:09.877322 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-10-09 10:29:09.877326 | orchestrator | Thursday 09 October 2025 10:28:23 +0000 (0:00:00.360) 0:06:21.716 ****** 2025-10-09 10:29:09.877331 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877336 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877341 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877346 | orchestrator | 2025-10-09 10:29:09.877350 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-10-09 10:29:09.877355 | orchestrator | Thursday 09 October 2025 10:28:24 +0000 (0:00:00.896) 0:06:22.612 ****** 2025-10-09 10:29:09.877360 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877365 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877370 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.877374 | orchestrator | 2025-10-09 10:29:09.877379 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-10-09 10:29:09.877384 | orchestrator | Thursday 09 October 2025 10:28:25 +0000 (0:00:00.730) 0:06:23.343 ****** 2025-10-09 10:29:09.877389 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877394 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877398 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.877403 | orchestrator | 2025-10-09 10:29:09.877411 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-10-09 10:29:09.877416 | orchestrator | Thursday 09 October 2025 10:28:25 +0000 (0:00:00.386) 0:06:23.729 ****** 2025-10-09 10:29:09.877425 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877430 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877435 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.877439 | orchestrator | 2025-10-09 10:29:09.877444 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-10-09 10:29:09.877449 | orchestrator | Thursday 09 October 2025 10:28:26 +0000 (0:00:00.838) 0:06:24.568 
****** 2025-10-09 10:29:09.877454 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877458 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877463 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.877468 | orchestrator | 2025-10-09 10:29:09.877473 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-10-09 10:29:09.877478 | orchestrator | Thursday 09 October 2025 10:28:27 +0000 (0:00:01.204) 0:06:25.773 ****** 2025-10-09 10:29:09.877482 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877487 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877492 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.877496 | orchestrator | 2025-10-09 10:29:09.877501 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-10-09 10:29:09.877506 | orchestrator | Thursday 09 October 2025 10:28:28 +0000 (0:00:01.012) 0:06:26.786 ****** 2025-10-09 10:29:09.877511 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.877516 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.877521 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.877525 | orchestrator | 2025-10-09 10:29:09.877530 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-10-09 10:29:09.877535 | orchestrator | Thursday 09 October 2025 10:28:38 +0000 (0:00:10.046) 0:06:36.832 ****** 2025-10-09 10:29:09.877540 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877545 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877549 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.877553 | orchestrator | 2025-10-09 10:29:09.877558 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-10-09 10:29:09.877563 | orchestrator | Thursday 09 October 2025 10:28:39 +0000 (0:00:00.765) 0:06:37.598 ****** 2025-10-09 10:29:09.877567 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.877572 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.877576 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.877581 | orchestrator | 2025-10-09 10:29:09.877585 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-10-09 10:29:09.877590 | orchestrator | Thursday 09 October 2025 10:28:47 +0000 (0:00:08.247) 0:06:45.845 ****** 2025-10-09 10:29:09.877594 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877599 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877603 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.877608 | orchestrator | 2025-10-09 10:29:09.877612 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-10-09 10:29:09.877617 | orchestrator | Thursday 09 October 2025 10:28:51 +0000 (0:00:04.263) 0:06:50.109 ****** 2025-10-09 10:29:09.877622 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:29:09.877626 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:29:09.877631 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:29:09.877635 | orchestrator | 2025-10-09 10:29:09.877640 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-10-09 10:29:09.877644 | orchestrator | Thursday 09 October 2025 10:29:01 +0000 (0:00:09.454) 0:06:59.563 ****** 2025-10-09 10:29:09.877649 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877653 | orchestrator | skipping: 
[testbed-node-1] 2025-10-09 10:29:09.877658 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877662 | orchestrator | 2025-10-09 10:29:09.877667 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-10-09 10:29:09.877671 | orchestrator | Thursday 09 October 2025 10:29:01 +0000 (0:00:00.365) 0:06:59.928 ****** 2025-10-09 10:29:09.877676 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877684 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877688 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877693 | orchestrator | 2025-10-09 10:29:09.877697 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-10-09 10:29:09.877702 | orchestrator | Thursday 09 October 2025 10:29:02 +0000 (0:00:00.377) 0:07:00.305 ****** 2025-10-09 10:29:09.877706 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877711 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877715 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877720 | orchestrator | 2025-10-09 10:29:09.877724 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-10-09 10:29:09.877729 | orchestrator | Thursday 09 October 2025 10:29:02 +0000 (0:00:00.746) 0:07:01.052 ****** 2025-10-09 10:29:09.877736 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877741 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877745 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877750 | orchestrator | 2025-10-09 10:29:09.877754 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-10-09 10:29:09.877759 | orchestrator | Thursday 09 October 2025 10:29:03 +0000 (0:00:00.374) 0:07:01.427 ****** 2025-10-09 10:29:09.877763 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877768 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877772 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877777 | orchestrator | 2025-10-09 10:29:09.877781 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-10-09 10:29:09.877786 | orchestrator | Thursday 09 October 2025 10:29:03 +0000 (0:00:00.374) 0:07:01.801 ****** 2025-10-09 10:29:09.877791 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:29:09.877795 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:29:09.877800 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:29:09.877804 | orchestrator | 2025-10-09 10:29:09.877809 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-10-09 10:29:09.877813 | orchestrator | Thursday 09 October 2025 10:29:03 +0000 (0:00:00.366) 0:07:02.167 ****** 2025-10-09 10:29:09.877818 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877822 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877827 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:29:09.877831 | orchestrator | 2025-10-09 10:29:09.877836 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-10-09 10:29:09.877840 | orchestrator | Thursday 09 October 2025 10:29:05 +0000 (0:00:01.336) 0:07:03.503 ****** 2025-10-09 10:29:09.877845 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:29:09.877849 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:29:09.877854 | orchestrator | ok: [testbed-node-2] 
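The "Wait for backup haproxy to start" and "Wait for ... to listen on VIP" handlers above only report ok once the restarted containers actually answer on their ports. A minimal sketch of that kind of readiness check, assuming a hypothetical VIP address rather than a value taken from this deployment (kolla-ansible implements the real handlers itself; this loop only shows the idea):

#!/usr/bin/env python3
"""Sketch of the check behind the 'Wait for ... to listen on VIP' handlers:
retry a TCP connect until the service answers or a deadline passes."""

import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 300.0, interval: float = 2.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)  # not listening yet, try again
    return False


if __name__ == "__main__":
    # Hypothetical internal VIP, with the skyline-console port seen in the log.
    print("listening" if wait_for_port("192.168.16.254", 9999) else "timed out")
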
2025-10-09 10:29:09.877859 | orchestrator | 2025-10-09 10:29:09.877863 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:29:09.877868 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-09 10:29:09.877873 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-09 10:29:09.877878 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-09 10:29:09.877882 | orchestrator | 2025-10-09 10:29:09.877887 | orchestrator | 2025-10-09 10:29:09.877891 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:29:09.877896 | orchestrator | Thursday 09 October 2025 10:29:06 +0000 (0:00:00.898) 0:07:04.402 ****** 2025-10-09 10:29:09.877901 | orchestrator | =============================================================================== 2025-10-09 10:29:09.877905 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.05s 2025-10-09 10:29:09.877910 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.45s 2025-10-09 10:29:09.877920 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.25s 2025-10-09 10:29:09.877924 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.53s 2025-10-09 10:29:09.877929 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.25s 2025-10-09 10:29:09.877933 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.83s 2025-10-09 10:29:09.877938 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.52s 2025-10-09 10:29:09.877943 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.66s 2025-10-09 10:29:09.877947 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.55s 2025-10-09 10:29:09.877951 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.32s 2025-10-09 10:29:09.877956 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.18s 2025-10-09 10:29:09.877960 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.86s 2025-10-09 10:29:09.877965 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.74s 2025-10-09 10:29:09.877969 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.72s 2025-10-09 10:29:09.877974 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.60s 2025-10-09 10:29:09.877978 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.58s 2025-10-09 10:29:09.877983 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.49s 2025-10-09 10:29:09.877988 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.38s 2025-10-09 10:29:09.877992 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.38s 2025-10-09 10:29:09.877997 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.26s 2025-10-09 10:29:12.923644 | orchestrator 
| 2025-10-09 10:29:12 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:12.925472 | orchestrator | 2025-10-09 10:29:12 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:12.927392 | orchestrator | 2025-10-09 10:29:12 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:12.927419 | orchestrator | 2025-10-09 10:29:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:15.975489 | orchestrator | 2025-10-09 10:29:15 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:15.976575 | orchestrator | 2025-10-09 10:29:15 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:15.978304 | orchestrator | 2025-10-09 10:29:15 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:15.978672 | orchestrator | 2025-10-09 10:29:15 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:19.016852 | orchestrator | 2025-10-09 10:29:19 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:19.016947 | orchestrator | 2025-10-09 10:29:19 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:19.017897 | orchestrator | 2025-10-09 10:29:19 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:19.018123 | orchestrator | 2025-10-09 10:29:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:22.055934 | orchestrator | 2025-10-09 10:29:22 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:22.056445 | orchestrator | 2025-10-09 10:29:22 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:22.057298 | orchestrator | 2025-10-09 10:29:22 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:22.057347 | orchestrator | 2025-10-09 10:29:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:25.093134 | orchestrator | 2025-10-09 10:29:25 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:25.094167 | orchestrator | 2025-10-09 10:29:25 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:25.095139 | orchestrator | 2025-10-09 10:29:25 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:25.095172 | orchestrator | 2025-10-09 10:29:25 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:28.141403 | orchestrator | 2025-10-09 10:29:28 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:28.142118 | orchestrator | 2025-10-09 10:29:28 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:28.144442 | orchestrator | 2025-10-09 10:29:28 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:28.144462 | orchestrator | 2025-10-09 10:29:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:31.205882 | orchestrator | 2025-10-09 10:29:31 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:31.208123 | orchestrator | 2025-10-09 10:29:31 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:31.210617 | orchestrator | 2025-10-09 10:29:31 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:31.210859 | orchestrator | 2025-10-09 10:29:31 | 
INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:34.251050 | orchestrator | 2025-10-09 10:29:34 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:34.251267 | orchestrator | 2025-10-09 10:29:34 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:34.252597 | orchestrator | 2025-10-09 10:29:34 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:34.252629 | orchestrator | 2025-10-09 10:29:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:37.300753 | orchestrator | 2025-10-09 10:29:37 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:37.302419 | orchestrator | 2025-10-09 10:29:37 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:37.303680 | orchestrator | 2025-10-09 10:29:37 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:37.303705 | orchestrator | 2025-10-09 10:29:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:40.361941 | orchestrator | 2025-10-09 10:29:40 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:40.362080 | orchestrator | 2025-10-09 10:29:40 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:40.364360 | orchestrator | 2025-10-09 10:29:40 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:40.364385 | orchestrator | 2025-10-09 10:29:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:43.402692 | orchestrator | 2025-10-09 10:29:43 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:43.404535 | orchestrator | 2025-10-09 10:29:43 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:43.405704 | orchestrator | 2025-10-09 10:29:43 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:43.405740 | orchestrator | 2025-10-09 10:29:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:46.468028 | orchestrator | 2025-10-09 10:29:46 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:46.468125 | orchestrator | 2025-10-09 10:29:46 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:46.468139 | orchestrator | 2025-10-09 10:29:46 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:46.468151 | orchestrator | 2025-10-09 10:29:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:49.515662 | orchestrator | 2025-10-09 10:29:49 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:49.519009 | orchestrator | 2025-10-09 10:29:49 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:49.519550 | orchestrator | 2025-10-09 10:29:49 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:49.520090 | orchestrator | 2025-10-09 10:29:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:52.558881 | orchestrator | 2025-10-09 10:29:52 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:52.560736 | orchestrator | 2025-10-09 10:29:52 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:52.563162 | orchestrator | 2025-10-09 10:29:52 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d 
is in state STARTED 2025-10-09 10:29:52.563192 | orchestrator | 2025-10-09 10:29:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:55.604116 | orchestrator | 2025-10-09 10:29:55 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:55.604756 | orchestrator | 2025-10-09 10:29:55 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:55.606434 | orchestrator | 2025-10-09 10:29:55 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:55.606545 | orchestrator | 2025-10-09 10:29:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:29:58.659887 | orchestrator | 2025-10-09 10:29:58 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:29:58.661459 | orchestrator | 2025-10-09 10:29:58 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:29:58.662971 | orchestrator | 2025-10-09 10:29:58 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:29:58.662999 | orchestrator | 2025-10-09 10:29:58 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:01.704772 | orchestrator | 2025-10-09 10:30:01 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:01.705481 | orchestrator | 2025-10-09 10:30:01 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:01.707408 | orchestrator | 2025-10-09 10:30:01 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:01.707436 | orchestrator | 2025-10-09 10:30:01 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:04.762197 | orchestrator | 2025-10-09 10:30:04 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:04.762968 | orchestrator | 2025-10-09 10:30:04 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:04.764323 | orchestrator | 2025-10-09 10:30:04 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:04.764449 | orchestrator | 2025-10-09 10:30:04 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:07.821100 | orchestrator | 2025-10-09 10:30:07 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:07.822765 | orchestrator | 2025-10-09 10:30:07 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:07.825947 | orchestrator | 2025-10-09 10:30:07 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:07.826370 | orchestrator | 2025-10-09 10:30:07 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:10.872343 | orchestrator | 2025-10-09 10:30:10 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:10.874642 | orchestrator | 2025-10-09 10:30:10 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:10.878173 | orchestrator | 2025-10-09 10:30:10 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:10.878285 | orchestrator | 2025-10-09 10:30:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:13.928833 | orchestrator | 2025-10-09 10:30:13 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:13.931124 | orchestrator | 2025-10-09 10:30:13 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:13.933289 | 
orchestrator | 2025-10-09 10:30:13 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:13.933321 | orchestrator | 2025-10-09 10:30:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:16.984432 | orchestrator | 2025-10-09 10:30:16 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:16.986975 | orchestrator | 2025-10-09 10:30:16 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:16.989911 | orchestrator | 2025-10-09 10:30:16 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:16.990498 | orchestrator | 2025-10-09 10:30:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:20.037030 | orchestrator | 2025-10-09 10:30:20 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:20.038848 | orchestrator | 2025-10-09 10:30:20 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:20.040489 | orchestrator | 2025-10-09 10:30:20 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:20.040896 | orchestrator | 2025-10-09 10:30:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:23.099639 | orchestrator | 2025-10-09 10:30:23 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:23.102106 | orchestrator | 2025-10-09 10:30:23 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:23.104723 | orchestrator | 2025-10-09 10:30:23 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:23.104756 | orchestrator | 2025-10-09 10:30:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:26.152389 | orchestrator | 2025-10-09 10:30:26 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:26.153648 | orchestrator | 2025-10-09 10:30:26 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:26.156038 | orchestrator | 2025-10-09 10:30:26 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:26.156088 | orchestrator | 2025-10-09 10:30:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:29.204671 | orchestrator | 2025-10-09 10:30:29 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:29.205640 | orchestrator | 2025-10-09 10:30:29 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:29.207614 | orchestrator | 2025-10-09 10:30:29 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:29.207652 | orchestrator | 2025-10-09 10:30:29 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:32.256421 | orchestrator | 2025-10-09 10:30:32 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:32.259759 | orchestrator | 2025-10-09 10:30:32 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:32.260947 | orchestrator | 2025-10-09 10:30:32 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:32.260975 | orchestrator | 2025-10-09 10:30:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:35.322081 | orchestrator | 2025-10-09 10:30:35 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:35.325643 | orchestrator | 2025-10-09 10:30:35 | INFO  | Task 
70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:35.327722 | orchestrator | 2025-10-09 10:30:35 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:35.327755 | orchestrator | 2025-10-09 10:30:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:38.370105 | orchestrator | 2025-10-09 10:30:38 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:38.371881 | orchestrator | 2025-10-09 10:30:38 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:38.374488 | orchestrator | 2025-10-09 10:30:38 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:38.374677 | orchestrator | 2025-10-09 10:30:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:41.429179 | orchestrator | 2025-10-09 10:30:41 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:41.429319 | orchestrator | 2025-10-09 10:30:41 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:41.430649 | orchestrator | 2025-10-09 10:30:41 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:41.430727 | orchestrator | 2025-10-09 10:30:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:44.481388 | orchestrator | 2025-10-09 10:30:44 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:44.484074 | orchestrator | 2025-10-09 10:30:44 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:44.486295 | orchestrator | 2025-10-09 10:30:44 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:44.486323 | orchestrator | 2025-10-09 10:30:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:47.543997 | orchestrator | 2025-10-09 10:30:47 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:47.544869 | orchestrator | 2025-10-09 10:30:47 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:47.547497 | orchestrator | 2025-10-09 10:30:47 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:47.547698 | orchestrator | 2025-10-09 10:30:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:50.593429 | orchestrator | 2025-10-09 10:30:50 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:50.595910 | orchestrator | 2025-10-09 10:30:50 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:50.598909 | orchestrator | 2025-10-09 10:30:50 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:50.598937 | orchestrator | 2025-10-09 10:30:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:53.654732 | orchestrator | 2025-10-09 10:30:53 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:53.657575 | orchestrator | 2025-10-09 10:30:53 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:53.660493 | orchestrator | 2025-10-09 10:30:53 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:53.660536 | orchestrator | 2025-10-09 10:30:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:56.718399 | orchestrator | 2025-10-09 10:30:56 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state 
STARTED 2025-10-09 10:30:56.720757 | orchestrator | 2025-10-09 10:30:56 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:56.724057 | orchestrator | 2025-10-09 10:30:56 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:56.724094 | orchestrator | 2025-10-09 10:30:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:59.765670 | orchestrator | 2025-10-09 10:30:59 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:30:59.765766 | orchestrator | 2025-10-09 10:30:59 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state STARTED 2025-10-09 10:30:59.766774 | orchestrator | 2025-10-09 10:30:59 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:30:59.766799 | orchestrator | 2025-10-09 10:30:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:02.823806 | orchestrator | 2025-10-09 10:31:02 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:02.832122 | orchestrator | 2025-10-09 10:31:02 | INFO  | Task 70e0e7f1-cd38-4cc5-951d-017e2c120fe5 is in state SUCCESS 2025-10-09 10:31:02.834787 | orchestrator | 2025-10-09 10:31:02.834832 | orchestrator | 2025-10-09 10:31:02.834845 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-10-09 10:31:02.834857 | orchestrator | 2025-10-09 10:31:02.834868 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-10-09 10:31:02.834880 | orchestrator | Thursday 09 October 2025 10:19:15 +0000 (0:00:01.018) 0:00:01.018 ****** 2025-10-09 10:31:02.834892 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.834905 | orchestrator | 2025-10-09 10:31:02.835031 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-10-09 10:31:02.835045 | orchestrator | Thursday 09 October 2025 10:19:17 +0000 (0:00:01.589) 0:00:02.607 ****** 2025-10-09 10:31:02.835056 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.835068 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.835079 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.835090 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.835101 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.835112 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.835147 | orchestrator | 2025-10-09 10:31:02.835159 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-10-09 10:31:02.835170 | orchestrator | Thursday 09 October 2025 10:19:19 +0000 (0:00:02.276) 0:00:04.884 ****** 2025-10-09 10:31:02.835181 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.835192 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.835202 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.835213 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.835284 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.835296 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.835367 | orchestrator | 2025-10-09 10:31:02.835383 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-10-09 10:31:02.835404 | orchestrator | Thursday 09 October 2025 10:19:21 +0000 (0:00:01.408) 0:00:06.292 ****** 2025-10-09 10:31:02.835416 | 
orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.835429 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.835441 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.835453 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.835465 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.835478 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.835512 | orchestrator | 2025-10-09 10:31:02.835528 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-10-09 10:31:02.835540 | orchestrator | Thursday 09 October 2025 10:19:22 +0000 (0:00:01.256) 0:00:07.548 ****** 2025-10-09 10:31:02.835552 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.835590 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.835603 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.835615 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.835627 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.835640 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.835653 | orchestrator | 2025-10-09 10:31:02.835665 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-10-09 10:31:02.835677 | orchestrator | Thursday 09 October 2025 10:19:23 +0000 (0:00:01.069) 0:00:08.618 ****** 2025-10-09 10:31:02.835690 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.835702 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.835714 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.835726 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.835736 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.835813 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.835826 | orchestrator | 2025-10-09 10:31:02.835838 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-10-09 10:31:02.835849 | orchestrator | Thursday 09 October 2025 10:19:24 +0000 (0:00:00.901) 0:00:09.519 ****** 2025-10-09 10:31:02.835859 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.835870 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.835881 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.835892 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.835902 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.835913 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.835924 | orchestrator | 2025-10-09 10:31:02.835934 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-10-09 10:31:02.835945 | orchestrator | Thursday 09 October 2025 10:19:25 +0000 (0:00:01.143) 0:00:10.663 ****** 2025-10-09 10:31:02.835956 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.835968 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.835979 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.835990 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.836001 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.836011 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.836022 | orchestrator | 2025-10-09 10:31:02.836033 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-10-09 10:31:02.836044 | orchestrator | Thursday 09 October 2025 10:19:26 +0000 (0:00:00.839) 0:00:11.503 ****** 2025-10-09 10:31:02.836054 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.836065 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.836076 | 
orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.836087 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.836098 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.836109 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.836119 | orchestrator | 2025-10-09 10:31:02.836130 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-10-09 10:31:02.836141 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:01.074) 0:00:12.577 ****** 2025-10-09 10:31:02.836161 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:31:02.836173 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:31:02.836184 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:31:02.836195 | orchestrator | 2025-10-09 10:31:02.836206 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-10-09 10:31:02.836234 | orchestrator | Thursday 09 October 2025 10:19:28 +0000 (0:00:00.761) 0:00:13.339 ****** 2025-10-09 10:31:02.836246 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.836256 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.836342 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.836354 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.836365 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.836376 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.836387 | orchestrator | 2025-10-09 10:31:02.836412 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-10-09 10:31:02.836424 | orchestrator | Thursday 09 October 2025 10:19:29 +0000 (0:00:01.864) 0:00:15.204 ****** 2025-10-09 10:31:02.836435 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:31:02.836446 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:31:02.836457 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:31:02.836468 | orchestrator | 2025-10-09 10:31:02.836479 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-10-09 10:31:02.836490 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:04.459) 0:00:19.663 ****** 2025-10-09 10:31:02.836554 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-09 10:31:02.836566 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 10:31:02.836577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 10:31:02.836588 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.836599 | orchestrator | 2025-10-09 10:31:02.836610 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-10-09 10:31:02.836620 | orchestrator | Thursday 09 October 2025 10:19:35 +0000 (0:00:00.681) 0:00:20.345 ****** 2025-10-09 10:31:02.836633 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836653 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836665 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836676 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.836687 | orchestrator | 2025-10-09 10:31:02.836698 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-10-09 10:31:02.836709 | orchestrator | Thursday 09 October 2025 10:19:35 +0000 (0:00:00.914) 0:00:21.259 ****** 2025-10-09 10:31:02.836722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836736 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836755 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836766 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.836777 | orchestrator | 2025-10-09 10:31:02.836788 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-10-09 10:31:02.836799 | orchestrator | Thursday 09 October 2025 10:19:36 +0000 (0:00:00.212) 0:00:21.472 ****** 2025-10-09 10:31:02.836813 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-10-09 10:19:30.918067', 'end': '2025-10-09 10:19:31.189405', 'delta': '0:00:00.271338', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836835 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-10-09 10:19:31.817995', 'end': '2025-10-09 10:19:32.095135', 'delta': '0:00:00.277140', 'msg': '', 
'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836853 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-10-09 10:19:33.883006', 'end': '2025-10-09 10:19:34.153164', 'delta': '0:00:00.270158', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.836865 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.836876 | orchestrator | 2025-10-09 10:31:02.836887 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-10-09 10:31:02.836897 | orchestrator | Thursday 09 October 2025 10:19:36 +0000 (0:00:00.694) 0:00:22.166 ****** 2025-10-09 10:31:02.836908 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.836919 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.836930 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.836941 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.836952 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.836963 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.836980 | orchestrator | 2025-10-09 10:31:02.836991 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-10-09 10:31:02.837002 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:02.792) 0:00:24.959 ****** 2025-10-09 10:31:02.837013 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.837024 | orchestrator | 2025-10-09 10:31:02.837035 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-10-09 10:31:02.837046 | orchestrator | Thursday 09 October 2025 10:19:40 +0000 (0:00:01.087) 0:00:26.047 ****** 2025-10-09 10:31:02.837056 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837067 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.837078 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.837089 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.837100 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.837111 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.837121 | orchestrator | 2025-10-09 10:31:02.837132 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-10-09 10:31:02.837143 | orchestrator | Thursday 09 October 2025 10:19:44 +0000 (0:00:03.400) 0:00:29.448 ****** 2025-10-09 10:31:02.837154 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837164 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.837175 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.837186 
| orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.837197 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.837207 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.837271 | orchestrator | 2025-10-09 10:31:02.837283 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-09 10:31:02.837294 | orchestrator | Thursday 09 October 2025 10:19:47 +0000 (0:00:03.165) 0:00:32.614 ****** 2025-10-09 10:31:02.837305 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837316 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.837327 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.837338 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.837504 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.837516 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.837527 | orchestrator | 2025-10-09 10:31:02.837538 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-10-09 10:31:02.837548 | orchestrator | Thursday 09 October 2025 10:19:49 +0000 (0:00:02.047) 0:00:34.662 ****** 2025-10-09 10:31:02.837559 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837570 | orchestrator | 2025-10-09 10:31:02.837581 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-10-09 10:31:02.837592 | orchestrator | Thursday 09 October 2025 10:19:49 +0000 (0:00:00.167) 0:00:34.829 ****** 2025-10-09 10:31:02.837602 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837613 | orchestrator | 2025-10-09 10:31:02.837623 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-09 10:31:02.837633 | orchestrator | Thursday 09 October 2025 10:19:49 +0000 (0:00:00.412) 0:00:35.242 ****** 2025-10-09 10:31:02.837642 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837652 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.837662 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.837671 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.837681 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.837691 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.837701 | orchestrator | 2025-10-09 10:31:02.837710 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-10-09 10:31:02.837726 | orchestrator | Thursday 09 October 2025 10:19:51 +0000 (0:00:01.345) 0:00:36.587 ****** 2025-10-09 10:31:02.837736 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837746 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.837755 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.837765 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.837775 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.837792 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.837801 | orchestrator | 2025-10-09 10:31:02.837811 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-10-09 10:31:02.837820 | orchestrator | Thursday 09 October 2025 10:19:53 +0000 (0:00:02.133) 0:00:38.721 ****** 2025-10-09 10:31:02.837830 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837839 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.837849 | orchestrator | skipping: [testbed-node-2] 2025-10-09 
10:31:02.837858 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.837868 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.837877 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.837887 | orchestrator | 2025-10-09 10:31:02.837896 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-10-09 10:31:02.837906 | orchestrator | Thursday 09 October 2025 10:19:54 +0000 (0:00:01.267) 0:00:39.989 ****** 2025-10-09 10:31:02.837915 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.837925 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.837934 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.837944 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.837953 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.837963 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.837972 | orchestrator | 2025-10-09 10:31:02.837982 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-10-09 10:31:02.837992 | orchestrator | Thursday 09 October 2025 10:19:55 +0000 (0:00:00.963) 0:00:40.952 ****** 2025-10-09 10:31:02.838092 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.838109 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.838119 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.838129 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.838139 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.838148 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.838158 | orchestrator | 2025-10-09 10:31:02.838168 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-10-09 10:31:02.838210 | orchestrator | Thursday 09 October 2025 10:19:57 +0000 (0:00:01.392) 0:00:42.345 ****** 2025-10-09 10:31:02.838238 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.838248 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.838257 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.838267 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.838277 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.838286 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.838296 | orchestrator | 2025-10-09 10:31:02.838305 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-10-09 10:31:02.838315 | orchestrator | Thursday 09 October 2025 10:19:58 +0000 (0:00:01.293) 0:00:43.638 ****** 2025-10-09 10:31:02.838325 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.838334 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.838344 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.838353 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.838434 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.838444 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.838454 | orchestrator | 2025-10-09 10:31:02.838464 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-10-09 10:31:02.838474 | orchestrator | Thursday 09 October 2025 10:19:59 +0000 (0:00:00.878) 0:00:44.516 ****** 2025-10-09 10:31:02.838484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 
'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.838622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.838635 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.838645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54591ae8--508f--52be--a28f--431401b6d1ad-osd--block--54591ae8--508f--52be--a28f--431401b6d1ad', 'dm-uuid-LVM-EVsqi2xWuBrGH7ndZWt2fLOetE5tvdmnZAtuoviZdwNP6SNMqWPVWlqPBLzBSFKA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part1', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part14', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part15', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part16', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.838858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b4e7e843--affd--596c--a641--84f7bc574f86-osd--block--b4e7e843--affd--596c--a641--84f7bc574f86', 'dm-uuid-LVM-K47ukKc3iSjQ5ctiCOYZ8X6UjrEi5wyygeSwvWYR8NJ9dB5ju47sNL755joY0PaZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 
'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.838894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part1', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part14', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part15', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part16', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.838944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.838955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.838991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--54591ae8--508f--52be--a28f--431401b6d1ad-osd--block--54591ae8--508f--52be--a28f--431401b6d1ad'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YNXba9-HJ0W-oMiN-czED-sHHj-nVGE-aLrrmj', 'scsi-0QEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16', 'scsi-SQEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b4e7e843--affd--596c--a641--84f7bc574f86-osd--block--b4e7e843--affd--596c--a641--84f7bc574f86'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-haDIf7-Emsh-nQyd-5J3H-Zscp-VTbV-fvYdrM', 'scsi-0QEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057', 'scsi-SQEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2', 'scsi-SQEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839214 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.839274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef021fd3--cbd8--5c2d--a302--cb22746ffda4-osd--block--ef021fd3--cbd8--5c2d--a302--cb22746ffda4', 'dm-uuid-LVM-T6jYMBRL7dp9tkdsf0p0IlQUxDmrOpuqqEA9KWjQ2sMW5BlNPfyg7hM5mY97Cgfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dd550970--9f11--5339--ba56--252bcc7474b0-osd--block--dd550970--9f11--5339--ba56--252bcc7474b0', 'dm-uuid-LVM-klrAyY2R3EDbTRZMGtUluIGdB92UKAjKW2w66lfYmOvHKm44Ai0bKUegTQpG4d7O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839410 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.839423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ef021fd3--cbd8--5c2d--a302--cb22746ffda4-osd--block--ef021fd3--cbd8--5c2d--a302--cb22746ffda4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9WSgIJ-I51A-wqaD-NOuA-6Xc0-6Vv6-Zpuk9u', 'scsi-0QEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b', 'scsi-SQEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839431 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.839446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dd550970--9f11--5339--ba56--252bcc7474b0-osd--block--dd550970--9f11--5339--ba56--252bcc7474b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u7NTW9-427L-fzc2-K2VT-yHeG-nl1k-MmJnGV', 'scsi-0QEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c', 'scsi-SQEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256', 'scsi-SQEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839479 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.839487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--858e0050--0995--591a--8058--d3b9e3e91008-osd--block--858e0050--0995--591a--8058--d3b9e3e91008', 'dm-uuid-LVM-VIOu0bezJs3RTPoIDuH3unxVMbg1Fhi6gEIo2sRefuTOgjraOMKyPpDJ80xf1QI1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7a727a7b--5e51--5ff8--a4b5--d0215765e108-osd--block--7a727a7b--5e51--5ff8--a4b5--d0215765e108', 'dm-uuid-LVM-A9aLUKkNez0PszVsSh0FgEdmDyanvMJtkM1tAbeCrlF43Wkmx7BLo2d0HFHi7tUf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:31:02.839605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--858e0050--0995--591a--8058--d3b9e3e91008-osd--block--858e0050--0995--591a--8058--d3b9e3e91008'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-llnifo-6NnZ-JfLO-NBAB-djPH-cniD-jJCdVG', 'scsi-0QEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317', 'scsi-SQEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7a727a7b--5e51--5ff8--a4b5--d0215765e108-osd--block--7a727a7b--5e51--5ff8--a4b5--d0215765e108'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8FejOd-vDRE-VG31-PbtH-QBde-6z0E-hwqNI0', 'scsi-0QEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5', 'scsi-SQEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb', 'scsi-SQEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:31:02.839660 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.839668 | orchestrator | 2025-10-09 10:31:02.839676 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-10-09 10:31:02.839685 | orchestrator | Thursday 09 October 2025 10:20:02 +0000 (0:00:03.362) 0:00:47.879 ****** 2025-10-09 10:31:02.839693 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839711 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839720 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839728 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839736 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839744 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839759 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839772 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839785 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6296768-f199-4dcc-a087-8057fc5b5b41-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839800 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839809 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.839818 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839840 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839849 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839857 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839866 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839874 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839889 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839904 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839916 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839925 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839933 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 
10:31:02.839962 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839971 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.839992 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840001 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840015 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840025 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage 
controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part1', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part14', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part15', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part16', 'scsi-SQEMU_QEMU_HARDDISK_896c45e4-ea0f-4d10-b682-ab8e70dacfed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840044 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840058 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part1', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part14', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part15', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part16', 'scsi-SQEMU_QEMU_HARDDISK_f028fdc4-497f-4226-9d58-f945a4d4fd39-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840067 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840081 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.840094 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54591ae8--508f--52be--a28f--431401b6d1ad-osd--block--54591ae8--508f--52be--a28f--431401b6d1ad', 'dm-uuid-LVM-EVsqi2xWuBrGH7ndZWt2fLOetE5tvdmnZAtuoviZdwNP6SNMqWPVWlqPBLzBSFKA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840107 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b4e7e843--affd--596c--a641--84f7bc574f86-osd--block--b4e7e843--affd--596c--a641--84f7bc574f86', 'dm-uuid-LVM-K47ukKc3iSjQ5ctiCOYZ8X6UjrEi5wyygeSwvWYR8NJ9dB5ju47sNL755joY0PaZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840124 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840141 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.840149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840176 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef021fd3--cbd8--5c2d--a302--cb22746ffda4-osd--block--ef021fd3--cbd8--5c2d--a302--cb22746ffda4', 'dm-uuid-LVM-T6jYMBRL7dp9tkdsf0p0IlQUxDmrOpuqqEA9KWjQ2sMW5BlNPfyg7hM5mY97Cgfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840197 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dd550970--9f11--5339--ba56--252bcc7474b0-osd--block--dd550970--9f11--5339--ba56--252bcc7474b0', 'dm-uuid-LVM-klrAyY2R3EDbTRZMGtUluIGdB92UKAjKW2w66lfYmOvHKm44Ai0bKUegTQpG4d7O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840206 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840246 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840255 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840297 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--54591ae8--508f--52be--a28f--431401b6d1ad-osd--block--54591ae8--508f--52be--a28f--431401b6d1ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YNXba9-HJ0W-oMiN-czED-sHHj-nVGE-aLrrmj', 'scsi-0QEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16', 'scsi-SQEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840311 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b4e7e843--affd--596c--a641--84f7bc574f86-osd--block--b4e7e843--affd--596c--a641--84f7bc574f86'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-haDIf7-Emsh-nQyd-5J3H-Zscp-VTbV-fvYdrM', 'scsi-0QEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057', 'scsi-SQEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840320 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2', 'scsi-SQEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840364 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840372 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.840384 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840393 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840407 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840424 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ef021fd3--cbd8--5c2d--a302--cb22746ffda4-osd--block--ef021fd3--cbd8--5c2d--a302--cb22746ffda4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9WSgIJ-I51A-wqaD-NOuA-6Xc0-6Vv6-Zpuk9u', 'scsi-0QEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b', 'scsi-SQEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--dd550970--9f11--5339--ba56--252bcc7474b0-osd--block--dd550970--9f11--5339--ba56--252bcc7474b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u7NTW9-427L-fzc2-K2VT-yHeG-nl1k-MmJnGV', 'scsi-0QEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c', 'scsi-SQEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840442 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--858e0050--0995--591a--8058--d3b9e3e91008-osd--block--858e0050--0995--591a--8058--d3b9e3e91008', 'dm-uuid-LVM-VIOu0bezJs3RTPoIDuH3unxVMbg1Fhi6gEIo2sRefuTOgjraOMKyPpDJ80xf1QI1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256', 'scsi-SQEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840472 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7a727a7b--5e51--5ff8--a4b5--d0215765e108-osd--block--7a727a7b--5e51--5ff8--a4b5--d0215765e108', 'dm-uuid-LVM-A9aLUKkNez0PszVsSh0FgEdmDyanvMJtkM1tAbeCrlF43Wkmx7BLo2d0HFHi7tUf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840509 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.840522 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840552 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840572 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--858e0050--0995--591a--8058--d3b9e3e91008-osd--block--858e0050--0995--591a--8058--d3b9e3e91008'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-llnifo-6NnZ-JfLO-NBAB-djPH-cniD-jJCdVG', 'scsi-0QEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317', 'scsi-SQEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840749 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7a727a7b--5e51--5ff8--a4b5--d0215765e108-osd--block--7a727a7b--5e51--5ff8--a4b5--d0215765e108'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8FejOd-vDRE-VG31-PbtH-QBde-6z0E-hwqNI0', 'scsi-0QEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5', 'scsi-SQEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb', 'scsi-SQEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840780 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:31:02.840788 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.840796 | orchestrator | 2025-10-09 10:31:02.840805 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-10-09 10:31:02.840813 | orchestrator | Thursday 09 October 2025 10:20:05 +0000 (0:00:03.088) 0:00:50.968 ****** 2025-10-09 10:31:02.840821 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.840829 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.840837 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.840845 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.840853 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.840861 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.840869 | orchestrator | 2025-10-09 10:31:02.840877 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-10-09 10:31:02.840885 | orchestrator | Thursday 09 October 2025 10:20:07 +0000 (0:00:02.277) 0:00:53.245 ****** 2025-10-09 10:31:02.840893 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.840901 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.840909 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.840916 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.840924 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.840932 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.840940 | orchestrator | 2025-10-09 10:31:02.840948 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-10-09 10:31:02.840956 | orchestrator | Thursday 09 October 2025 10:20:09 +0000 (0:00:01.386) 0:00:54.632 ****** 2025-10-09 10:31:02.840964 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.840972 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.840980 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.840987 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.840995 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.841003 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.841011 | orchestrator | 2025-10-09 10:31:02.841019 | orchestrator | TASK [ceph-facts : Set 
osd_pool_default_crush_rule fact] *********************** 2025-10-09 10:31:02.841027 | orchestrator | Thursday 09 October 2025 10:20:10 +0000 (0:00:01.424) 0:00:56.056 ****** 2025-10-09 10:31:02.841034 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.841042 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.841050 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.841058 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.841066 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.841080 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.841088 | orchestrator | 2025-10-09 10:31:02.841096 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-10-09 10:31:02.841104 | orchestrator | Thursday 09 October 2025 10:20:11 +0000 (0:00:01.012) 0:00:57.069 ****** 2025-10-09 10:31:02.841112 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.841120 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.841128 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.841141 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.841149 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.841157 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.841165 | orchestrator | 2025-10-09 10:31:02.841173 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-10-09 10:31:02.841181 | orchestrator | Thursday 09 October 2025 10:20:13 +0000 (0:00:02.118) 0:00:59.187 ****** 2025-10-09 10:31:02.841189 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.841196 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.841204 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.841212 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.841237 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.841246 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.841254 | orchestrator | 2025-10-09 10:31:02.841261 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-10-09 10:31:02.841269 | orchestrator | Thursday 09 October 2025 10:20:15 +0000 (0:00:01.294) 0:01:00.482 ****** 2025-10-09 10:31:02.841277 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-10-09 10:31:02.841285 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-10-09 10:31:02.841293 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:31:02.841301 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-10-09 10:31:02.841309 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-10-09 10:31:02.841317 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-10-09 10:31:02.841324 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-10-09 10:31:02.841332 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-10-09 10:31:02.841340 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-10-09 10:31:02.841349 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-10-09 10:31:02.841358 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-10-09 10:31:02.841366 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-10-09 10:31:02.841375 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-10-09 10:31:02.841384 | orchestrator | ok: [testbed-node-4] => 
(item=testbed-node-1) 2025-10-09 10:31:02.841393 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-10-09 10:31:02.841402 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-10-09 10:31:02.841411 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-10-09 10:31:02.841420 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-10-09 10:31:02.841429 | orchestrator | 2025-10-09 10:31:02.841438 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-10-09 10:31:02.841447 | orchestrator | Thursday 09 October 2025 10:20:22 +0000 (0:00:06.977) 0:01:07.459 ****** 2025-10-09 10:31:02.841456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-09 10:31:02.841464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 10:31:02.841473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 10:31:02.841482 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.841490 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-10-09 10:31:02.841499 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-10-09 10:31:02.841508 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-10-09 10:31:02.841517 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.841531 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-10-09 10:31:02.841540 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-10-09 10:31:02.841548 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-10-09 10:31:02.841557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-10-09 10:31:02.841566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-10-09 10:31:02.841598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-10-09 10:31:02.841607 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.841617 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.841625 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-10-09 10:31:02.841634 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-10-09 10:31:02.841643 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-10-09 10:31:02.841652 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.841661 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-10-09 10:31:02.841669 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-10-09 10:31:02.841678 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-10-09 10:31:02.841687 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.841695 | orchestrator | 2025-10-09 10:31:02.841705 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-10-09 10:31:02.841712 | orchestrator | Thursday 09 October 2025 10:20:23 +0000 (0:00:01.659) 0:01:09.118 ****** 2025-10-09 10:31:02.841720 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.841728 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.841735 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.841744 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.841752 | orchestrator | 2025-10-09 
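The `Set_fact _monitor_addresses - ipv4` results above show every host collecting one entry per monitor (testbed-node-0 through testbed-node-2). A minimal sketch of that accumulation pattern, assuming a `mons` inventory group and `ansible_host` as the address source; the names are illustrative, not the actual ceph-ansible task:

```yaml
# Sketch of the monitor-address accumulation pattern (assumed names, not the
# ceph-ansible source). Each iteration appends {name, addr} for one monitor.
- name: Set_fact _monitor_addresses - ipv4 (sketch)
  ansible.builtin.set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default([]) +
         [{'name': item, 'addr': hostvars[item]['ansible_host']}] }}
  loop: "{{ groups['mons'] }}"  # e.g. testbed-node-0, testbed-node-1, testbed-node-2
```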
10:31:02.841760 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-10-09 10:31:02.841772 | orchestrator | Thursday 09 October 2025 10:20:25 +0000 (0:00:01.798) 0:01:10.917 ****** 2025-10-09 10:31:02.841780 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.841788 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.841796 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.841803 | orchestrator | 2025-10-09 10:31:02.841811 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-10-09 10:31:02.841823 | orchestrator | Thursday 09 October 2025 10:20:26 +0000 (0:00:00.476) 0:01:11.393 ****** 2025-10-09 10:31:02.841832 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.841840 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.841847 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.841855 | orchestrator | 2025-10-09 10:31:02.841863 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-10-09 10:31:02.841871 | orchestrator | Thursday 09 October 2025 10:20:26 +0000 (0:00:00.667) 0:01:12.061 ****** 2025-10-09 10:31:02.841879 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.841887 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.841894 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.841902 | orchestrator | 2025-10-09 10:31:02.841910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-10-09 10:31:02.841918 | orchestrator | Thursday 09 October 2025 10:20:27 +0000 (0:00:01.151) 0:01:13.213 ****** 2025-10-09 10:31:02.841926 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.841934 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.841942 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.841950 | orchestrator | 2025-10-09 10:31:02.841957 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-10-09 10:31:02.841965 | orchestrator | Thursday 09 October 2025 10:20:30 +0000 (0:00:02.107) 0:01:15.320 ****** 2025-10-09 10:31:02.841979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.841986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.841994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.842002 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.842010 | orchestrator | 2025-10-09 10:31:02.842044 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-10-09 10:31:02.842052 | orchestrator | Thursday 09 October 2025 10:20:30 +0000 (0:00:00.799) 0:01:16.120 ****** 2025-10-09 10:31:02.842060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.842068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.842076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.842083 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.842091 | orchestrator | 2025-10-09 10:31:02.842099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-10-09 10:31:02.842107 | orchestrator | Thursday 09 October 2025 10:20:32 +0000 (0:00:01.225) 0:01:17.346 
****** 2025-10-09 10:31:02.842115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.842123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.842130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.842138 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.842146 | orchestrator | 2025-10-09 10:31:02.842154 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-10-09 10:31:02.842162 | orchestrator | Thursday 09 October 2025 10:20:33 +0000 (0:00:01.069) 0:01:18.415 ****** 2025-10-09 10:31:02.842170 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.842177 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.842185 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.842193 | orchestrator | 2025-10-09 10:31:02.842201 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-10-09 10:31:02.842208 | orchestrator | Thursday 09 October 2025 10:20:33 +0000 (0:00:00.651) 0:01:19.066 ****** 2025-10-09 10:31:02.842262 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-10-09 10:31:02.842272 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-10-09 10:31:02.842280 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-10-09 10:31:02.842288 | orchestrator | 2025-10-09 10:31:02.842295 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-10-09 10:31:02.842303 | orchestrator | Thursday 09 October 2025 10:20:35 +0000 (0:00:01.525) 0:01:20.592 ****** 2025-10-09 10:31:02.842311 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:31:02.842319 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:31:02.842327 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:31:02.842335 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-10-09 10:31:02.842343 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-10-09 10:31:02.842351 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-10-09 10:31:02.842358 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-10-09 10:31:02.842366 | orchestrator | 2025-10-09 10:31:02.842374 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-10-09 10:31:02.842382 | orchestrator | Thursday 09 October 2025 10:20:37 +0000 (0:00:02.266) 0:01:22.859 ****** 2025-10-09 10:31:02.842390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:31:02.842397 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:31:02.842405 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:31:02.842422 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-10-09 10:31:02.842430 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-10-09 10:31:02.842442 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-10-09 10:31:02.842450 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-10-09 10:31:02.842458 | orchestrator | 2025-10-09 10:31:02.842466 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:31:02.842474 | orchestrator | Thursday 09 October 2025 10:20:39 +0000 (0:00:02.306) 0:01:25.165 ****** 2025-10-09 10:31:02.842492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.842501 | orchestrator | 2025-10-09 10:31:02.842509 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:31:02.842517 | orchestrator | Thursday 09 October 2025 10:20:41 +0000 (0:00:01.694) 0:01:26.860 ****** 2025-10-09 10:31:02.842525 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.842533 | orchestrator | 2025-10-09 10:31:02.842541 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:31:02.842549 | orchestrator | Thursday 09 October 2025 10:20:43 +0000 (0:00:01.974) 0:01:28.835 ****** 2025-10-09 10:31:02.842557 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.842565 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.842572 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.842580 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.842588 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.842596 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.842604 | orchestrator | 2025-10-09 10:31:02.842611 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:31:02.842619 | orchestrator | Thursday 09 October 2025 10:20:45 +0000 (0:00:01.895) 0:01:30.730 ****** 2025-10-09 10:31:02.842627 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.842635 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.842643 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.842651 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.842659 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.842667 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.842674 | orchestrator | 2025-10-09 10:31:02.842682 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:31:02.842690 | orchestrator | Thursday 09 October 2025 10:20:47 +0000 (0:00:02.199) 0:01:32.929 ****** 2025-10-09 10:31:02.842698 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.842705 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.842712 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.842719 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.842725 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.842732 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.842739 | orchestrator | 2025-10-09 10:31:02.842745 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:31:02.842752 | orchestrator | Thursday 09 October 2025 10:20:48 +0000 (0:00:01.263) 0:01:34.193 ****** 2025-10-09 10:31:02.842759 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.842765 | orchestrator | skipping: 
[testbed-node-1] 2025-10-09 10:31:02.842772 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.842778 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.842785 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.842809 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.842817 | orchestrator | 2025-10-09 10:31:02.842823 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:31:02.842830 | orchestrator | Thursday 09 October 2025 10:20:49 +0000 (0:00:01.048) 0:01:35.241 ****** 2025-10-09 10:31:02.842842 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.842849 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.842856 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.842862 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.842869 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.842876 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.842882 | orchestrator | 2025-10-09 10:31:02.842889 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:31:02.842896 | orchestrator | Thursday 09 October 2025 10:20:51 +0000 (0:00:01.346) 0:01:36.588 ****** 2025-10-09 10:31:02.842903 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.842909 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.842916 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.842922 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.842929 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.842936 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.842942 | orchestrator | 2025-10-09 10:31:02.842949 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:31:02.842956 | orchestrator | Thursday 09 October 2025 10:20:52 +0000 (0:00:00.928) 0:01:37.516 ****** 2025-10-09 10:31:02.842962 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.842969 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.842975 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.842982 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.842989 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.842995 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.843002 | orchestrator | 2025-10-09 10:31:02.843009 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:31:02.843015 | orchestrator | Thursday 09 October 2025 10:20:52 +0000 (0:00:00.682) 0:01:38.199 ****** 2025-10-09 10:31:02.843022 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.843029 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.843035 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.843042 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.843049 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.843055 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.843062 | orchestrator | 2025-10-09 10:31:02.843069 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:31:02.843075 | orchestrator | Thursday 09 October 2025 10:20:54 +0000 (0:00:01.432) 0:01:39.632 ****** 2025-10-09 10:31:02.843082 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.843089 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.843095 | orchestrator | ok: 
[testbed-node-2] 2025-10-09 10:31:02.843102 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.843114 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.843121 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.843128 | orchestrator | 2025-10-09 10:31:02.843134 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:31:02.843141 | orchestrator | Thursday 09 October 2025 10:20:55 +0000 (0:00:01.250) 0:01:40.883 ****** 2025-10-09 10:31:02.843148 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.843158 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.843165 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.843172 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.843178 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.843185 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.843191 | orchestrator | 2025-10-09 10:31:02.843198 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:31:02.843205 | orchestrator | Thursday 09 October 2025 10:20:56 +0000 (0:00:00.809) 0:01:41.692 ****** 2025-10-09 10:31:02.843211 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.843232 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.843239 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.843251 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.843258 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.843264 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.843271 | orchestrator | 2025-10-09 10:31:02.843278 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:31:02.843285 | orchestrator | Thursday 09 October 2025 10:20:57 +0000 (0:00:00.642) 0:01:42.335 ****** 2025-10-09 10:31:02.843291 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.843298 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.843305 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.843311 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.843318 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.843324 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.843331 | orchestrator | 2025-10-09 10:31:02.843338 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:31:02.843345 | orchestrator | Thursday 09 October 2025 10:20:57 +0000 (0:00:00.896) 0:01:43.231 ****** 2025-10-09 10:31:02.843351 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.843358 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.843365 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.843371 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.843378 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.843385 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.843391 | orchestrator | 2025-10-09 10:31:02.843398 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:31:02.843405 | orchestrator | Thursday 09 October 2025 10:20:58 +0000 (0:00:00.634) 0:01:43.866 ****** 2025-10-09 10:31:02.843411 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.843418 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.843425 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.843432 | orchestrator | ok: 
[testbed-node-3] 2025-10-09 10:31:02.843438 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.843445 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.843452 | orchestrator | 2025-10-09 10:31:02.843458 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:31:02.843465 | orchestrator | Thursday 09 October 2025 10:20:59 +0000 (0:00:00.882) 0:01:44.749 ****** 2025-10-09 10:31:02.843472 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.843478 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.843485 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.843492 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.843498 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.843505 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.843512 | orchestrator | 2025-10-09 10:31:02.843519 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:31:02.843525 | orchestrator | Thursday 09 October 2025 10:21:00 +0000 (0:00:00.581) 0:01:45.331 ****** 2025-10-09 10:31:02.843532 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.843539 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.843545 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.843552 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.843559 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.843565 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.843572 | orchestrator | 2025-10-09 10:31:02.843579 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:31:02.843585 | orchestrator | Thursday 09 October 2025 10:21:01 +0000 (0:00:00.984) 0:01:46.316 ****** 2025-10-09 10:31:02.843592 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.843599 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.843605 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.843612 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.843619 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.843625 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.843632 | orchestrator | 2025-10-09 10:31:02.843639 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:31:02.843650 | orchestrator | Thursday 09 October 2025 10:21:01 +0000 (0:00:00.676) 0:01:46.992 ****** 2025-10-09 10:31:02.843658 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.843664 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.843671 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.843678 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.843684 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.843691 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.843697 | orchestrator | 2025-10-09 10:31:02.843704 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:31:02.843711 | orchestrator | Thursday 09 October 2025 10:21:02 +0000 (0:00:00.893) 0:01:47.886 ****** 2025-10-09 10:31:02.843718 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.843724 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.843731 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.843737 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.843744 | orchestrator | ok: [testbed-node-4] 
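Each `Check for a ... container` / `Set_fact handler_*_status` pair above probes for a running daemon container on the relevant hosts and turns the probe result into a boolean fact. A hedged sketch of that pattern for the mon case; the container runtime command, container naming scheme, and variable names are assumptions for illustration, not the ceph-handler source:

```yaml
# Illustrative pattern only; command, container name and fact names are assumed.
- name: Check for a mon container (sketch)
  ansible.builtin.command: docker ps -q --filter "name=ceph-mon-{{ ansible_hostname }}"
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups.get('mons', [])

- name: Set_fact handler_mon_status (sketch)
  ansible.builtin.set_fact:
    handler_mon_status: "{{ (ceph_mon_container_stat.stdout | default('')) | length > 0 }}"
  when: inventory_hostname in groups.get('mons', [])
```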
2025-10-09 10:31:02.843751 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.843757 | orchestrator | 2025-10-09 10:31:02.843764 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-10-09 10:31:02.843771 | orchestrator | Thursday 09 October 2025 10:21:04 +0000 (0:00:01.685) 0:01:49.571 ****** 2025-10-09 10:31:02.843778 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.843784 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.843791 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.843801 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.843808 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.843815 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.843821 | orchestrator | 2025-10-09 10:31:02.843828 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-10-09 10:31:02.843835 | orchestrator | Thursday 09 October 2025 10:21:05 +0000 (0:00:01.645) 0:01:51.217 ****** 2025-10-09 10:31:02.843845 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.843852 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.843859 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.843865 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.843872 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.843879 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.843885 | orchestrator | 2025-10-09 10:31:02.843892 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-10-09 10:31:02.843899 | orchestrator | Thursday 09 October 2025 10:21:08 +0000 (0:00:02.366) 0:01:53.584 ****** 2025-10-09 10:31:02.843906 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.843913 | orchestrator | 2025-10-09 10:31:02.843920 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-10-09 10:31:02.843926 | orchestrator | Thursday 09 October 2025 10:21:09 +0000 (0:00:01.244) 0:01:54.828 ****** 2025-10-09 10:31:02.843933 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.843940 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.843946 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.843953 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.843960 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.843966 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.843973 | orchestrator | 2025-10-09 10:31:02.843980 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-10-09 10:31:02.843987 | orchestrator | Thursday 09 October 2025 10:21:10 +0000 (0:00:00.631) 0:01:55.460 ****** 2025-10-09 10:31:02.843993 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844000 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844007 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844013 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.844030 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844037 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844043 | orchestrator | 2025-10-09 10:31:02.844050 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] 
************************** 2025-10-09 10:31:02.844057 | orchestrator | Thursday 09 October 2025 10:21:11 +0000 (0:00:00.846) 0:01:56.307 ****** 2025-10-09 10:31:02.844064 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-10-09 10:31:02.844071 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-10-09 10:31:02.844077 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-10-09 10:31:02.844084 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-10-09 10:31:02.844091 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-10-09 10:31:02.844098 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-10-09 10:31:02.844104 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-10-09 10:31:02.844111 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-10-09 10:31:02.844118 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-10-09 10:31:02.844124 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-10-09 10:31:02.844131 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-10-09 10:31:02.844137 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-10-09 10:31:02.844144 | orchestrator | 2025-10-09 10:31:02.844151 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-10-09 10:31:02.844157 | orchestrator | Thursday 09 October 2025 10:21:12 +0000 (0:00:01.234) 0:01:57.541 ****** 2025-10-09 10:31:02.844164 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.844171 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.844178 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.844184 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.844191 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.844198 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.844204 | orchestrator | 2025-10-09 10:31:02.844211 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-10-09 10:31:02.844230 | orchestrator | Thursday 09 October 2025 10:21:13 +0000 (0:00:01.035) 0:01:58.577 ****** 2025-10-09 10:31:02.844237 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844244 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844250 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844257 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.844264 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844270 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844277 | orchestrator | 2025-10-09 10:31:02.844284 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-10-09 10:31:02.844290 | orchestrator | Thursday 09 October 2025 10:21:13 +0000 (0:00:00.533) 0:01:59.111 ****** 2025-10-09 10:31:02.844297 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844304 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844311 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844317 | orchestrator 
| skipping: [testbed-node-3] 2025-10-09 10:31:02.844324 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844330 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844337 | orchestrator | 2025-10-09 10:31:02.844344 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-10-09 10:31:02.844354 | orchestrator | Thursday 09 October 2025 10:21:14 +0000 (0:00:00.724) 0:01:59.835 ****** 2025-10-09 10:31:02.844360 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844367 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844378 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844385 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.844392 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844402 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844409 | orchestrator | 2025-10-09 10:31:02.844415 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-10-09 10:31:02.844422 | orchestrator | Thursday 09 October 2025 10:21:15 +0000 (0:00:00.558) 0:02:00.393 ****** 2025-10-09 10:31:02.844429 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.844436 | orchestrator | 2025-10-09 10:31:02.844443 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-10-09 10:31:02.844450 | orchestrator | Thursday 09 October 2025 10:21:16 +0000 (0:00:01.077) 0:02:01.471 ****** 2025-10-09 10:31:02.844456 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.844463 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.844470 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.844476 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.844483 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.844489 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.844496 | orchestrator | 2025-10-09 10:31:02.844503 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-10-09 10:31:02.844509 | orchestrator | Thursday 09 October 2025 10:22:07 +0000 (0:00:51.650) 0:02:53.122 ****** 2025-10-09 10:31:02.844516 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:31:02.844523 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:31:02.844529 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:31:02.844536 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844543 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:31:02.844549 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:31:02.844556 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:31:02.844563 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844569 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:31:02.844576 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:31:02.844583 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 
10:31:02.844589 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844596 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:31:02.844603 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:31:02.844610 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:31:02.844616 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.844623 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:31:02.844630 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:31:02.844637 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:31:02.844643 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:31:02.844650 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:31:02.844657 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:31:02.844663 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844670 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844677 | orchestrator | 2025-10-09 10:31:02.844683 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-10-09 10:31:02.844695 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:00.658) 0:02:53.780 ****** 2025-10-09 10:31:02.844702 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844709 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844715 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844722 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.844728 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844735 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844742 | orchestrator | 2025-10-09 10:31:02.844748 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-10-09 10:31:02.844755 | orchestrator | Thursday 09 October 2025 10:22:09 +0000 (0:00:00.726) 0:02:54.507 ****** 2025-10-09 10:31:02.844762 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844768 | orchestrator | 2025-10-09 10:31:02.844775 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-10-09 10:31:02.844782 | orchestrator | Thursday 09 October 2025 10:22:09 +0000 (0:00:00.146) 0:02:54.654 ****** 2025-10-09 10:31:02.844788 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844795 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844802 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844809 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.844815 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844822 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844828 | orchestrator | 2025-10-09 10:31:02.844835 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-10-09 10:31:02.844842 | orchestrator | Thursday 09 October 2025 10:22:09 +0000 (0:00:00.626) 0:02:55.280 ****** 2025-10-09 10:31:02.844849 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844856 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844866 | 
orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844873 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.844879 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844886 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844893 | orchestrator | 2025-10-09 10:31:02.844899 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-10-09 10:31:02.844909 | orchestrator | Thursday 09 October 2025 10:22:11 +0000 (0:00:01.022) 0:02:56.303 ****** 2025-10-09 10:31:02.844916 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.844923 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.844930 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.844936 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.844943 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.844950 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.844956 | orchestrator | 2025-10-09 10:31:02.844963 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-10-09 10:31:02.844970 | orchestrator | Thursday 09 October 2025 10:22:12 +0000 (0:00:01.036) 0:02:57.339 ****** 2025-10-09 10:31:02.844976 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.844983 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.844990 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.844997 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.845003 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.845010 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.845016 | orchestrator | 2025-10-09 10:31:02.845023 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-10-09 10:31:02.845030 | orchestrator | Thursday 09 October 2025 10:22:14 +0000 (0:00:02.378) 0:02:59.718 ****** 2025-10-09 10:31:02.845036 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.845043 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.845050 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.845056 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.845063 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.845069 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.845082 | orchestrator | 2025-10-09 10:31:02.845089 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-10-09 10:31:02.845095 | orchestrator | Thursday 09 October 2025 10:22:15 +0000 (0:00:00.651) 0:03:00.369 ****** 2025-10-09 10:31:02.845102 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.845110 | orchestrator | 2025-10-09 10:31:02.845117 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-10-09 10:31:02.845124 | orchestrator | Thursday 09 October 2025 10:22:16 +0000 (0:00:01.274) 0:03:01.644 ****** 2025-10-09 10:31:02.845130 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.845137 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.845143 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.845150 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.845157 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.845163 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.845170 
| orchestrator | 2025-10-09 10:31:02.845177 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-10-09 10:31:02.845184 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:00.677) 0:03:02.322 ****** 2025-10-09 10:31:02.845190 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.845197 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.845204 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.845210 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.845247 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.845255 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.845262 | orchestrator | 2025-10-09 10:31:02.845268 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-10-09 10:31:02.845275 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:00.885) 0:03:03.207 ****** 2025-10-09 10:31:02.845282 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.845288 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.845294 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.845300 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.845306 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.845312 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.845318 | orchestrator | 2025-10-09 10:31:02.845325 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-10-09 10:31:02.845331 | orchestrator | Thursday 09 October 2025 10:22:18 +0000 (0:00:00.756) 0:03:03.964 ****** 2025-10-09 10:31:02.845337 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.845343 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.845349 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.845355 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.845361 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.845368 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.845374 | orchestrator | 2025-10-09 10:31:02.845380 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-10-09 10:31:02.845386 | orchestrator | Thursday 09 October 2025 10:22:19 +0000 (0:00:00.995) 0:03:04.959 ****** 2025-10-09 10:31:02.845392 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.845398 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.845404 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.845411 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.845417 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.845423 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.845429 | orchestrator | 2025-10-09 10:31:02.845435 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-10-09 10:31:02.845441 | orchestrator | Thursday 09 October 2025 10:22:20 +0000 (0:00:00.800) 0:03:05.760 ****** 2025-10-09 10:31:02.845448 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.845454 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.845465 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.845471 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.845477 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.845483 | orchestrator | skipping: [testbed-node-5] 2025-10-09 
10:31:02.845489 | orchestrator | 2025-10-09 10:31:02.845496 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-10-09 10:31:02.845502 | orchestrator | Thursday 09 October 2025 10:22:21 +0000 (0:00:00.911) 0:03:06.671 ****** 2025-10-09 10:31:02.845511 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.845518 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.845524 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.845530 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.845536 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.845542 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.845548 | orchestrator | 2025-10-09 10:31:02.845558 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-10-09 10:31:02.845564 | orchestrator | Thursday 09 October 2025 10:22:22 +0000 (0:00:00.729) 0:03:07.401 ****** 2025-10-09 10:31:02.845570 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.845577 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.845583 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.845589 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.845595 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.845601 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.845607 | orchestrator | 2025-10-09 10:31:02.845613 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-10-09 10:31:02.845619 | orchestrator | Thursday 09 October 2025 10:22:22 +0000 (0:00:00.871) 0:03:08.272 ****** 2025-10-09 10:31:02.845625 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.845631 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.845637 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.845644 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.845650 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.845656 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.845662 | orchestrator | 2025-10-09 10:31:02.845668 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-10-09 10:31:02.845674 | orchestrator | Thursday 09 October 2025 10:22:24 +0000 (0:00:01.497) 0:03:09.770 ****** 2025-10-09 10:31:02.845681 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.845687 | orchestrator | 2025-10-09 10:31:02.845693 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-10-09 10:31:02.845699 | orchestrator | Thursday 09 October 2025 10:22:25 +0000 (0:00:01.466) 0:03:11.236 ****** 2025-10-09 10:31:02.845705 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-10-09 10:31:02.845712 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-10-09 10:31:02.845718 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-10-09 10:31:02.845724 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-10-09 10:31:02.845730 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-10-09 10:31:02.845736 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-10-09 10:31:02.845742 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-10-09 10:31:02.845748 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-10-09 10:31:02.845754 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-10-09 10:31:02.845760 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-10-09 10:31:02.845766 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-10-09 10:31:02.845773 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-10-09 10:31:02.845779 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-10-09 10:31:02.845785 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-10-09 10:31:02.845795 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-10-09 10:31:02.845802 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-10-09 10:31:02.845808 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-10-09 10:31:02.845814 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-10-09 10:31:02.845820 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-10-09 10:31:02.845826 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-10-09 10:31:02.845832 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-10-09 10:31:02.845838 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-10-09 10:31:02.845844 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-10-09 10:31:02.845851 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-10-09 10:31:02.845857 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-10-09 10:31:02.845863 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-10-09 10:31:02.845869 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-10-09 10:31:02.845875 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-10-09 10:31:02.845881 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-10-09 10:31:02.845887 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-10-09 10:31:02.845893 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-10-09 10:31:02.845899 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-10-09 10:31:02.845905 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-10-09 10:31:02.845911 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-10-09 10:31:02.845917 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-10-09 10:31:02.845923 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-10-09 10:31:02.845929 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-10-09 10:31:02.845935 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-10-09 10:31:02.845941 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-10-09 10:31:02.845948 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:31:02.845954 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:31:02.845963 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-10-09 10:31:02.845969 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:31:02.845975 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:31:02.845982 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-10-09 10:31:02.845991 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:31:02.845997 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:31:02.846004 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:31:02.846010 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:31:02.846043 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-10-09 10:31:02.846050 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:31:02.846056 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:31:02.846063 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:31:02.846069 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:31:02.846075 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:31:02.846081 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:31:02.846091 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:31:02.846098 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:31:02.846104 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:31:02.846110 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:31:02.846116 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:31:02.846122 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:31:02.846128 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:31:02.846134 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:31:02.846140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:31:02.846146 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:31:02.846152 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:31:02.846159 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:31:02.846165 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:31:02.846171 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:31:02.846177 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:31:02.846183 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:31:02.846189 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:31:02.846195 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:31:02.846202 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:31:02.846208 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:31:02.846214 | orchestrator | changed: [testbed-node-1] 
=> (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:31:02.846231 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:31:02.846237 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:31:02.846243 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:31:02.846249 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-10-09 10:31:02.846255 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:31:02.846261 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-10-09 10:31:02.846267 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-10-09 10:31:02.846273 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:31:02.846280 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-10-09 10:31:02.846286 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-10-09 10:31:02.846292 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:31:02.846298 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-10-09 10:31:02.846304 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-10-09 10:31:02.846310 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:31:02.846316 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-10-09 10:31:02.846323 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-10-09 10:31:02.846329 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-10-09 10:31:02.846335 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-10-09 10:31:02.846341 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-10-09 10:31:02.846347 | orchestrator | 2025-10-09 10:31:02.846360 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-10-09 10:31:02.846370 | orchestrator | Thursday 09 October 2025 10:22:33 +0000 (0:00:07.459) 0:03:18.695 ****** 2025-10-09 10:31:02.846376 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846383 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846389 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846395 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.846401 | orchestrator | 2025-10-09 10:31:02.846415 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-10-09 10:31:02.846422 | orchestrator | Thursday 09 October 2025 10:22:34 +0000 (0:00:01.446) 0:03:20.142 ****** 2025-10-09 10:31:02.846428 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.846436 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.846442 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.846448 | orchestrator | 2025-10-09 10:31:02.846454 | orchestrator | TASK [ceph-config : Generate 
environment file] ********************************* 2025-10-09 10:31:02.846461 | orchestrator | Thursday 09 October 2025 10:22:35 +0000 (0:00:00.952) 0:03:21.094 ****** 2025-10-09 10:31:02.846467 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.846473 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.846479 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.846485 | orchestrator | 2025-10-09 10:31:02.846492 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-10-09 10:31:02.846498 | orchestrator | Thursday 09 October 2025 10:22:37 +0000 (0:00:01.790) 0:03:22.885 ****** 2025-10-09 10:31:02.846504 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846510 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846516 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846522 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.846528 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.846534 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.846540 | orchestrator | 2025-10-09 10:31:02.846547 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-10-09 10:31:02.846553 | orchestrator | Thursday 09 October 2025 10:22:38 +0000 (0:00:00.880) 0:03:23.766 ****** 2025-10-09 10:31:02.846559 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846565 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846571 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846577 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.846583 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.846589 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.846596 | orchestrator | 2025-10-09 10:31:02.846602 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-10-09 10:31:02.846608 | orchestrator | Thursday 09 October 2025 10:22:40 +0000 (0:00:01.846) 0:03:25.612 ****** 2025-10-09 10:31:02.846614 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846620 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846626 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846632 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.846638 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.846644 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.846650 | orchestrator | 2025-10-09 10:31:02.846657 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-10-09 10:31:02.846667 | orchestrator | Thursday 09 October 2025 10:22:41 +0000 (0:00:00.711) 0:03:26.324 ****** 2025-10-09 10:31:02.846673 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846679 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846685 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846692 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.846698 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.846704 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.846710 | orchestrator | 2025-10-09 
10:31:02.846716 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-10-09 10:31:02.846722 | orchestrator | Thursday 09 October 2025 10:22:42 +0000 (0:00:01.041) 0:03:27.365 ****** 2025-10-09 10:31:02.846728 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846734 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846740 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846746 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.846752 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.846758 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.846764 | orchestrator | 2025-10-09 10:31:02.846771 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-10-09 10:31:02.846777 | orchestrator | Thursday 09 October 2025 10:22:42 +0000 (0:00:00.797) 0:03:28.162 ****** 2025-10-09 10:31:02.846783 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846789 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846795 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.846801 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846807 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.846813 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.846819 | orchestrator | 2025-10-09 10:31:02.846825 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-10-09 10:31:02.846832 | orchestrator | Thursday 09 October 2025 10:22:44 +0000 (0:00:01.270) 0:03:29.433 ****** 2025-10-09 10:31:02.846838 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846844 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846850 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846859 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.846865 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.846871 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.846877 | orchestrator | 2025-10-09 10:31:02.846884 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-10-09 10:31:02.846893 | orchestrator | Thursday 09 October 2025 10:22:45 +0000 (0:00:01.013) 0:03:30.446 ****** 2025-10-09 10:31:02.846899 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846906 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846912 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846918 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.846924 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.846930 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.846936 | orchestrator | 2025-10-09 10:31:02.846943 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-10-09 10:31:02.846949 | orchestrator | Thursday 09 October 2025 10:22:45 +0000 (0:00:00.738) 0:03:31.185 ****** 2025-10-09 10:31:02.846955 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.846961 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.846968 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.846974 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.846980 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.846986 | 
orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.846992 | orchestrator | 2025-10-09 10:31:02.846998 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-10-09 10:31:02.847005 | orchestrator | Thursday 09 October 2025 10:22:50 +0000 (0:00:04.124) 0:03:35.310 ****** 2025-10-09 10:31:02.847015 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847021 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847027 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847033 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.847040 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.847046 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.847052 | orchestrator | 2025-10-09 10:31:02.847058 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-10-09 10:31:02.847065 | orchestrator | Thursday 09 October 2025 10:22:51 +0000 (0:00:01.124) 0:03:36.434 ****** 2025-10-09 10:31:02.847071 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847077 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847083 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847089 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.847095 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.847102 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.847108 | orchestrator | 2025-10-09 10:31:02.847114 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-10-09 10:31:02.847120 | orchestrator | Thursday 09 October 2025 10:22:52 +0000 (0:00:01.374) 0:03:37.808 ****** 2025-10-09 10:31:02.847127 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847133 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847139 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847145 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.847151 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.847157 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.847163 | orchestrator | 2025-10-09 10:31:02.847170 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-10-09 10:31:02.847176 | orchestrator | Thursday 09 October 2025 10:22:53 +0000 (0:00:00.867) 0:03:38.676 ****** 2025-10-09 10:31:02.847182 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847188 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847194 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847201 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.847207 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.847213 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.847231 | orchestrator | 2025-10-09 10:31:02.847238 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-10-09 10:31:02.847244 | orchestrator | Thursday 09 October 2025 10:22:54 +0000 (0:00:01.329) 0:03:40.005 ****** 2025-10-09 10:31:02.847250 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847256 | orchestrator | skipping: 
[testbed-node-1] 2025-10-09 10:31:02.847263 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847269 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-10-09 10:31:02.847277 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-10-09 10:31:02.847285 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-10-09 10:31:02.847299 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-10-09 10:31:02.847305 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.847312 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.847322 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-10-09 10:31:02.847329 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-10-09 10:31:02.847335 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.847341 | orchestrator | 2025-10-09 10:31:02.847348 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-10-09 10:31:02.847354 | orchestrator | Thursday 09 October 2025 10:22:55 +0000 (0:00:00.971) 0:03:40.977 ****** 2025-10-09 10:31:02.847360 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847366 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847372 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847378 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.847384 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.847391 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.847397 | orchestrator | 2025-10-09 10:31:02.847403 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-10-09 10:31:02.847409 | orchestrator | Thursday 09 October 2025 10:22:56 +0000 (0:00:01.202) 0:03:42.179 ****** 2025-10-09 10:31:02.847415 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847421 | orchestrator | skipping: [testbed-node-1] 2025-10-09 
10:31:02.847428 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847434 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.847440 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.847446 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.847452 | orchestrator | 2025-10-09 10:31:02.847458 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-10-09 10:31:02.847465 | orchestrator | Thursday 09 October 2025 10:22:57 +0000 (0:00:00.835) 0:03:43.014 ****** 2025-10-09 10:31:02.847471 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847477 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847483 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847489 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.847495 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.847501 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.847508 | orchestrator | 2025-10-09 10:31:02.847514 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-10-09 10:31:02.847520 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:01.111) 0:03:44.126 ****** 2025-10-09 10:31:02.847526 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847532 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847538 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847545 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.847551 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.847557 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.847563 | orchestrator | 2025-10-09 10:31:02.847569 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-10-09 10:31:02.847579 | orchestrator | Thursday 09 October 2025 10:22:59 +0000 (0:00:00.771) 0:03:44.897 ****** 2025-10-09 10:31:02.847585 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847591 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847598 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847604 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.847610 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.847616 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.847622 | orchestrator | 2025-10-09 10:31:02.847628 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-10-09 10:31:02.847635 | orchestrator | Thursday 09 October 2025 10:23:00 +0000 (0:00:01.335) 0:03:46.233 ****** 2025-10-09 10:31:02.847641 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847647 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847653 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847659 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.847665 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.847671 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.847678 | orchestrator | 2025-10-09 10:31:02.847684 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-10-09 10:31:02.847690 | orchestrator | Thursday 09 October 2025 10:23:01 +0000 (0:00:01.035) 0:03:47.268 ****** 2025-10-09 10:31:02.847696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-10-09 
10:31:02.847702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-10-09 10:31:02.847708 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-10-09 10:31:02.847715 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847721 | orchestrator | 2025-10-09 10:31:02.847727 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-10-09 10:31:02.847733 | orchestrator | Thursday 09 October 2025 10:23:02 +0000 (0:00:00.883) 0:03:48.151 ****** 2025-10-09 10:31:02.847739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-10-09 10:31:02.847745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-10-09 10:31:02.847751 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-10-09 10:31:02.847758 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847764 | orchestrator | 2025-10-09 10:31:02.847773 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-10-09 10:31:02.847779 | orchestrator | Thursday 09 October 2025 10:23:03 +0000 (0:00:01.073) 0:03:49.225 ****** 2025-10-09 10:31:02.847786 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-10-09 10:31:02.847795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-10-09 10:31:02.847801 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-10-09 10:31:02.847808 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847814 | orchestrator | 2025-10-09 10:31:02.847820 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-10-09 10:31:02.847826 | orchestrator | Thursday 09 October 2025 10:23:04 +0000 (0:00:00.542) 0:03:49.767 ****** 2025-10-09 10:31:02.847832 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847838 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847844 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847851 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.847857 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.847863 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.847869 | orchestrator | 2025-10-09 10:31:02.847875 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-10-09 10:31:02.847881 | orchestrator | Thursday 09 October 2025 10:23:05 +0000 (0:00:01.197) 0:03:50.965 ****** 2025-10-09 10:31:02.847887 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-10-09 10:31:02.847894 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.847900 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-10-09 10:31:02.847906 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.847916 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-10-09 10:31:02.847922 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-10-09 10:31:02.847928 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.847934 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-10-09 10:31:02.847940 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-10-09 10:31:02.847946 | orchestrator | 2025-10-09 10:31:02.847952 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-10-09 10:31:02.847959 | orchestrator | Thursday 09 October 2025 10:23:09 +0000 (0:00:03.539) 0:03:54.504 ****** 2025-10-09 10:31:02.847965 | orchestrator 
| changed: [testbed-node-0] 2025-10-09 10:31:02.847971 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.847977 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.847983 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.847989 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.847995 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.848001 | orchestrator | 2025-10-09 10:31:02.848007 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-09 10:31:02.848014 | orchestrator | Thursday 09 October 2025 10:23:13 +0000 (0:00:03.977) 0:03:58.482 ****** 2025-10-09 10:31:02.848020 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.848026 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.848032 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.848038 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.848044 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.848050 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.848056 | orchestrator | 2025-10-09 10:31:02.848062 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-10-09 10:31:02.848068 | orchestrator | Thursday 09 October 2025 10:23:15 +0000 (0:00:01.853) 0:04:00.336 ****** 2025-10-09 10:31:02.848074 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848081 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.848087 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.848093 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.848099 | orchestrator | 2025-10-09 10:31:02.848105 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-10-09 10:31:02.848111 | orchestrator | Thursday 09 October 2025 10:23:16 +0000 (0:00:01.340) 0:04:01.676 ****** 2025-10-09 10:31:02.848117 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.848124 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.848130 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.848136 | orchestrator | 2025-10-09 10:31:02.848142 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-10-09 10:31:02.848148 | orchestrator | Thursday 09 October 2025 10:23:16 +0000 (0:00:00.441) 0:04:02.118 ****** 2025-10-09 10:31:02.848154 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.848161 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.848167 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.848173 | orchestrator | 2025-10-09 10:31:02.848179 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-10-09 10:31:02.848185 | orchestrator | Thursday 09 October 2025 10:23:18 +0000 (0:00:01.714) 0:04:03.832 ****** 2025-10-09 10:31:02.848191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-09 10:31:02.848197 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 10:31:02.848203 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 10:31:02.848209 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.848226 | orchestrator | 2025-10-09 10:31:02.848232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-10-09 
10:31:02.848238 | orchestrator | Thursday 09 October 2025 10:23:20 +0000 (0:00:01.809) 0:04:05.642 ****** 2025-10-09 10:31:02.848244 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.848251 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.848261 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.848267 | orchestrator | 2025-10-09 10:31:02.848273 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-10-09 10:31:02.848279 | orchestrator | Thursday 09 October 2025 10:23:21 +0000 (0:00:00.786) 0:04:06.429 ****** 2025-10-09 10:31:02.848286 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.848292 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.848298 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.848304 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-4, testbed-node-3, testbed-node-5 2025-10-09 10:31:02.848310 | orchestrator | 2025-10-09 10:31:02.848320 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-10-09 10:31:02.848326 | orchestrator | Thursday 09 October 2025 10:23:22 +0000 (0:00:01.278) 0:04:07.707 ****** 2025-10-09 10:31:02.848332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.848339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.848348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.848354 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848361 | orchestrator | 2025-10-09 10:31:02.848367 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-10-09 10:31:02.848373 | orchestrator | Thursday 09 October 2025 10:23:23 +0000 (0:00:00.820) 0:04:08.528 ****** 2025-10-09 10:31:02.848379 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848385 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.848392 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.848398 | orchestrator | 2025-10-09 10:31:02.848404 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-10-09 10:31:02.848410 | orchestrator | Thursday 09 October 2025 10:23:24 +0000 (0:00:00.872) 0:04:09.401 ****** 2025-10-09 10:31:02.848416 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848422 | orchestrator | 2025-10-09 10:31:02.848428 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-10-09 10:31:02.848434 | orchestrator | Thursday 09 October 2025 10:23:24 +0000 (0:00:00.265) 0:04:09.667 ****** 2025-10-09 10:31:02.848441 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848447 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.848453 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.848459 | orchestrator | 2025-10-09 10:31:02.848465 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-10-09 10:31:02.848471 | orchestrator | Thursday 09 October 2025 10:23:24 +0000 (0:00:00.402) 0:04:10.069 ****** 2025-10-09 10:31:02.848477 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848484 | orchestrator | 2025-10-09 10:31:02.848490 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-10-09 10:31:02.848496 | orchestrator | Thursday 09 October 2025 
10:23:24 +0000 (0:00:00.214) 0:04:10.283 ****** 2025-10-09 10:31:02.848502 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848508 | orchestrator | 2025-10-09 10:31:02.848514 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-10-09 10:31:02.848520 | orchestrator | Thursday 09 October 2025 10:23:25 +0000 (0:00:00.326) 0:04:10.610 ****** 2025-10-09 10:31:02.848527 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848533 | orchestrator | 2025-10-09 10:31:02.848539 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-10-09 10:31:02.848545 | orchestrator | Thursday 09 October 2025 10:23:25 +0000 (0:00:00.212) 0:04:10.823 ****** 2025-10-09 10:31:02.848551 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848557 | orchestrator | 2025-10-09 10:31:02.848564 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-10-09 10:31:02.848570 | orchestrator | Thursday 09 October 2025 10:23:25 +0000 (0:00:00.409) 0:04:11.232 ****** 2025-10-09 10:31:02.848576 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848586 | orchestrator | 2025-10-09 10:31:02.848592 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-10-09 10:31:02.848598 | orchestrator | Thursday 09 October 2025 10:23:26 +0000 (0:00:00.326) 0:04:11.559 ****** 2025-10-09 10:31:02.848605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.848611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.848617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.848623 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848629 | orchestrator | 2025-10-09 10:31:02.848635 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-10-09 10:31:02.848642 | orchestrator | Thursday 09 October 2025 10:23:27 +0000 (0:00:01.105) 0:04:12.664 ****** 2025-10-09 10:31:02.848648 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848654 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.848660 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.848666 | orchestrator | 2025-10-09 10:31:02.848672 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-10-09 10:31:02.848678 | orchestrator | Thursday 09 October 2025 10:23:28 +0000 (0:00:00.669) 0:04:13.334 ****** 2025-10-09 10:31:02.848685 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848691 | orchestrator | 2025-10-09 10:31:02.848697 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-10-09 10:31:02.848703 | orchestrator | Thursday 09 October 2025 10:23:28 +0000 (0:00:00.260) 0:04:13.594 ****** 2025-10-09 10:31:02.848709 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848715 | orchestrator | 2025-10-09 10:31:02.848722 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-10-09 10:31:02.848728 | orchestrator | Thursday 09 October 2025 10:23:28 +0000 (0:00:00.253) 0:04:13.847 ****** 2025-10-09 10:31:02.848734 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.848740 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.848746 | orchestrator | skipping: [testbed-node-2] 
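Annotation: the handler sequence recorded above (make tempdir, copy mon/osd restart script, conditionally restart daemons) and the mds/rgw/mgr handler blocks that follow all use the same pattern: render a restart script onto each node, then execute it only where the matching daemon was detected as running. The snippet below is a minimal illustrative sketch of that pattern in plain Ansible, not the actual ceph-ansible role code; the "mons" host group, the /tmp script path, and the hard-coded handler_mon_status value are assumptions made for the example.

- hosts: mons
  become: true
  vars:
    # In the real roles this fact is computed by ceph-handler (see the
    # "Set_fact handler_mon_status" task later in this log); fixed here for illustration.
    handler_mon_status: true
  handlers:
    - name: Copy mon restart script (sketch)
      ansible.builtin.copy:
        dest: /tmp/restart_mon_daemon.sh
        mode: "0750"
        content: |
          #!/bin/bash
          # hypothetical stand-in for the templated restart script
          systemctl restart "ceph-mon@$(hostname -s)"
      listen: Restart ceph mons
    - name: Restart ceph mon daemon(s) (sketch)
      ansible.builtin.command: /tmp/restart_mon_daemon.sh
      when: handler_mon_status | bool
      listen: Restart ceph mons
  tasks:
    - name: Generate Ceph file (stand-in for the real config render)
      ansible.builtin.copy:
        dest: /etc/ceph/ceph.conf
        content: "# placeholder; the real task templates the full ceph.conf\n"
      notify: Restart ceph mons

In the actual run the restart is additionally gated on whether the rendered configuration changed, and for OSDs it is wrapped with the noup-flag and balancer handling visible in the skipped tasks above.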
2025-10-09 10:31:02.848752 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.848759 | orchestrator | 2025-10-09 10:31:02.848765 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-10-09 10:31:02.848771 | orchestrator | Thursday 09 October 2025 10:23:29 +0000 (0:00:01.228) 0:04:15.076 ****** 2025-10-09 10:31:02.848777 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.848783 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.848790 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.848796 | orchestrator | 2025-10-09 10:31:02.848802 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-10-09 10:31:02.848808 | orchestrator | Thursday 09 October 2025 10:23:30 +0000 (0:00:00.618) 0:04:15.694 ****** 2025-10-09 10:31:02.848814 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.848821 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.848830 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.848836 | orchestrator | 2025-10-09 10:31:02.848843 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-10-09 10:31:02.848849 | orchestrator | Thursday 09 October 2025 10:23:31 +0000 (0:00:01.462) 0:04:17.157 ****** 2025-10-09 10:31:02.848855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.848864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.848870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.848877 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.848883 | orchestrator | 2025-10-09 10:31:02.848889 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-10-09 10:31:02.848895 | orchestrator | Thursday 09 October 2025 10:23:33 +0000 (0:00:01.131) 0:04:18.289 ****** 2025-10-09 10:31:02.848901 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.848907 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.848917 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.848923 | orchestrator | 2025-10-09 10:31:02.848929 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-10-09 10:31:02.848936 | orchestrator | Thursday 09 October 2025 10:23:33 +0000 (0:00:00.861) 0:04:19.150 ****** 2025-10-09 10:31:02.848942 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.848948 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.848954 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.848960 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.848967 | orchestrator | 2025-10-09 10:31:02.848973 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-10-09 10:31:02.848979 | orchestrator | Thursday 09 October 2025 10:23:36 +0000 (0:00:02.721) 0:04:21.872 ****** 2025-10-09 10:31:02.848985 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.848991 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.848997 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.849004 | orchestrator | 2025-10-09 10:31:02.849010 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] 
*********************** 2025-10-09 10:31:02.849016 | orchestrator | Thursday 09 October 2025 10:23:37 +0000 (0:00:00.589) 0:04:22.461 ****** 2025-10-09 10:31:02.849022 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.849028 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.849034 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.849041 | orchestrator | 2025-10-09 10:31:02.849047 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-10-09 10:31:02.849053 | orchestrator | Thursday 09 October 2025 10:23:39 +0000 (0:00:02.010) 0:04:24.472 ****** 2025-10-09 10:31:02.849059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.849065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.849071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.849077 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.849083 | orchestrator | 2025-10-09 10:31:02.849090 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-10-09 10:31:02.849096 | orchestrator | Thursday 09 October 2025 10:23:40 +0000 (0:00:00.881) 0:04:25.354 ****** 2025-10-09 10:31:02.849102 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.849108 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.849114 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.849120 | orchestrator | 2025-10-09 10:31:02.849127 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-10-09 10:31:02.849133 | orchestrator | Thursday 09 October 2025 10:23:40 +0000 (0:00:00.553) 0:04:25.908 ****** 2025-10-09 10:31:02.849139 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849145 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849151 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849157 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.849164 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.849170 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.849176 | orchestrator | 2025-10-09 10:31:02.849182 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-10-09 10:31:02.849188 | orchestrator | Thursday 09 October 2025 10:23:41 +0000 (0:00:01.065) 0:04:26.973 ****** 2025-10-09 10:31:02.849194 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.849201 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.849207 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.849213 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.849248 | orchestrator | 2025-10-09 10:31:02.849255 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-10-09 10:31:02.849261 | orchestrator | Thursday 09 October 2025 10:23:43 +0000 (0:00:01.526) 0:04:28.499 ****** 2025-10-09 10:31:02.849273 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.849280 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.849286 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.849292 | orchestrator | 2025-10-09 10:31:02.849298 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-10-09 10:31:02.849304 | orchestrator | Thursday 09 
October 2025 10:23:43 +0000 (0:00:00.385) 0:04:28.885 ****** 2025-10-09 10:31:02.849310 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.849317 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.849323 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.849329 | orchestrator | 2025-10-09 10:31:02.849335 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-10-09 10:31:02.849341 | orchestrator | Thursday 09 October 2025 10:23:45 +0000 (0:00:01.658) 0:04:30.544 ****** 2025-10-09 10:31:02.849348 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-09 10:31:02.849354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 10:31:02.849360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 10:31:02.849366 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849372 | orchestrator | 2025-10-09 10:31:02.849378 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-10-09 10:31:02.849387 | orchestrator | Thursday 09 October 2025 10:23:46 +0000 (0:00:00.753) 0:04:31.297 ****** 2025-10-09 10:31:02.849394 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.849400 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.849406 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.849412 | orchestrator | 2025-10-09 10:31:02.849418 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-10-09 10:31:02.849424 | orchestrator | 2025-10-09 10:31:02.849435 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:31:02.849441 | orchestrator | Thursday 09 October 2025 10:23:46 +0000 (0:00:00.581) 0:04:31.878 ****** 2025-10-09 10:31:02.849447 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.849454 | orchestrator | 2025-10-09 10:31:02.849460 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:31:02.849466 | orchestrator | Thursday 09 October 2025 10:23:47 +0000 (0:00:00.774) 0:04:32.653 ****** 2025-10-09 10:31:02.849472 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.849478 | orchestrator | 2025-10-09 10:31:02.849484 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:31:02.849490 | orchestrator | Thursday 09 October 2025 10:23:47 +0000 (0:00:00.554) 0:04:33.207 ****** 2025-10-09 10:31:02.849496 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.849503 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.849509 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.849515 | orchestrator | 2025-10-09 10:31:02.849521 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:31:02.849526 | orchestrator | Thursday 09 October 2025 10:23:48 +0000 (0:00:00.860) 0:04:34.068 ****** 2025-10-09 10:31:02.849532 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849537 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849542 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849548 | orchestrator | 2025-10-09 10:31:02.849553 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:31:02.849559 | orchestrator | Thursday 09 October 2025 10:23:49 +0000 (0:00:00.677) 0:04:34.746 ****** 2025-10-09 10:31:02.849564 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849569 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849575 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849580 | orchestrator | 2025-10-09 10:31:02.849586 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:31:02.849595 | orchestrator | Thursday 09 October 2025 10:23:49 +0000 (0:00:00.386) 0:04:35.132 ****** 2025-10-09 10:31:02.849600 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849606 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849611 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849616 | orchestrator | 2025-10-09 10:31:02.849622 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:31:02.849627 | orchestrator | Thursday 09 October 2025 10:23:50 +0000 (0:00:00.416) 0:04:35.549 ****** 2025-10-09 10:31:02.849633 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.849638 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.849643 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.849649 | orchestrator | 2025-10-09 10:31:02.849654 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:31:02.849660 | orchestrator | Thursday 09 October 2025 10:23:51 +0000 (0:00:00.933) 0:04:36.482 ****** 2025-10-09 10:31:02.849665 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849670 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849676 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849681 | orchestrator | 2025-10-09 10:31:02.849686 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:31:02.849692 | orchestrator | Thursday 09 October 2025 10:23:52 +0000 (0:00:00.823) 0:04:37.306 ****** 2025-10-09 10:31:02.849697 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849703 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849708 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849713 | orchestrator | 2025-10-09 10:31:02.849718 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:31:02.849724 | orchestrator | Thursday 09 October 2025 10:23:52 +0000 (0:00:00.548) 0:04:37.855 ****** 2025-10-09 10:31:02.849729 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.849734 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.849740 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.849745 | orchestrator | 2025-10-09 10:31:02.849751 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:31:02.849756 | orchestrator | Thursday 09 October 2025 10:23:53 +0000 (0:00:00.981) 0:04:38.836 ****** 2025-10-09 10:31:02.849761 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.849767 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.849772 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.849778 | orchestrator | 2025-10-09 10:31:02.849783 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:31:02.849788 | 
orchestrator | Thursday 09 October 2025 10:23:54 +0000 (0:00:00.804) 0:04:39.641 ****** 2025-10-09 10:31:02.849794 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849799 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849804 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849810 | orchestrator | 2025-10-09 10:31:02.849815 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:31:02.849820 | orchestrator | Thursday 09 October 2025 10:23:54 +0000 (0:00:00.628) 0:04:40.270 ****** 2025-10-09 10:31:02.849826 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.849831 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.849836 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.849842 | orchestrator | 2025-10-09 10:31:02.849847 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:31:02.849853 | orchestrator | Thursday 09 October 2025 10:23:55 +0000 (0:00:00.423) 0:04:40.693 ****** 2025-10-09 10:31:02.849858 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849863 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849869 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849874 | orchestrator | 2025-10-09 10:31:02.849882 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:31:02.849888 | orchestrator | Thursday 09 October 2025 10:23:55 +0000 (0:00:00.353) 0:04:41.047 ****** 2025-10-09 10:31:02.849897 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849902 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849907 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849913 | orchestrator | 2025-10-09 10:31:02.849921 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:31:02.849927 | orchestrator | Thursday 09 October 2025 10:23:56 +0000 (0:00:00.377) 0:04:41.425 ****** 2025-10-09 10:31:02.849933 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849938 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849943 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849948 | orchestrator | 2025-10-09 10:31:02.849954 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:31:02.849959 | orchestrator | Thursday 09 October 2025 10:23:56 +0000 (0:00:00.308) 0:04:41.734 ****** 2025-10-09 10:31:02.849964 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.849970 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.849975 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.849981 | orchestrator | 2025-10-09 10:31:02.849986 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:31:02.849991 | orchestrator | Thursday 09 October 2025 10:23:57 +0000 (0:00:00.586) 0:04:42.321 ****** 2025-10-09 10:31:02.849996 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.850002 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.850007 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.850061 | orchestrator | 2025-10-09 10:31:02.850069 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:31:02.850074 | orchestrator | Thursday 09 October 2025 10:23:57 +0000 (0:00:00.321) 
0:04:42.642 ****** 2025-10-09 10:31:02.850080 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850085 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850091 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850096 | orchestrator | 2025-10-09 10:31:02.850102 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:31:02.850107 | orchestrator | Thursday 09 October 2025 10:23:57 +0000 (0:00:00.458) 0:04:43.101 ****** 2025-10-09 10:31:02.850113 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850118 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850123 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850129 | orchestrator | 2025-10-09 10:31:02.850134 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:31:02.850139 | orchestrator | Thursday 09 October 2025 10:23:58 +0000 (0:00:00.351) 0:04:43.452 ****** 2025-10-09 10:31:02.850145 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850150 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850155 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850161 | orchestrator | 2025-10-09 10:31:02.850166 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-10-09 10:31:02.850171 | orchestrator | Thursday 09 October 2025 10:23:59 +0000 (0:00:00.906) 0:04:44.359 ****** 2025-10-09 10:31:02.850177 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850182 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850187 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850193 | orchestrator | 2025-10-09 10:31:02.850198 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-10-09 10:31:02.850203 | orchestrator | Thursday 09 October 2025 10:23:59 +0000 (0:00:00.338) 0:04:44.697 ****** 2025-10-09 10:31:02.850209 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.850214 | orchestrator | 2025-10-09 10:31:02.850232 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-10-09 10:31:02.850237 | orchestrator | Thursday 09 October 2025 10:24:00 +0000 (0:00:00.842) 0:04:45.540 ****** 2025-10-09 10:31:02.850243 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.850248 | orchestrator | 2025-10-09 10:31:02.850254 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-10-09 10:31:02.850264 | orchestrator | Thursday 09 October 2025 10:24:00 +0000 (0:00:00.178) 0:04:45.718 ****** 2025-10-09 10:31:02.850269 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-10-09 10:31:02.850275 | orchestrator | 2025-10-09 10:31:02.850280 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-10-09 10:31:02.850286 | orchestrator | Thursday 09 October 2025 10:24:01 +0000 (0:00:01.116) 0:04:46.835 ****** 2025-10-09 10:31:02.850291 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850297 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850302 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850308 | orchestrator | 2025-10-09 10:31:02.850313 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-10-09 10:31:02.850319 | orchestrator | Thursday 09 October 2025 10:24:01 
+0000 (0:00:00.372) 0:04:47.208 ****** 2025-10-09 10:31:02.850324 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850330 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850335 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850340 | orchestrator | 2025-10-09 10:31:02.850346 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-10-09 10:31:02.850351 | orchestrator | Thursday 09 October 2025 10:24:02 +0000 (0:00:00.346) 0:04:47.554 ****** 2025-10-09 10:31:02.850357 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.850362 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.850368 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.850373 | orchestrator | 2025-10-09 10:31:02.850378 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-10-09 10:31:02.850384 | orchestrator | Thursday 09 October 2025 10:24:03 +0000 (0:00:01.220) 0:04:48.775 ****** 2025-10-09 10:31:02.850389 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.850395 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.850400 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.850406 | orchestrator | 2025-10-09 10:31:02.850411 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-10-09 10:31:02.850417 | orchestrator | Thursday 09 October 2025 10:24:04 +0000 (0:00:01.159) 0:04:49.934 ****** 2025-10-09 10:31:02.850422 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.850431 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.850437 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.850442 | orchestrator | 2025-10-09 10:31:02.850448 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-10-09 10:31:02.850454 | orchestrator | Thursday 09 October 2025 10:24:05 +0000 (0:00:00.766) 0:04:50.701 ****** 2025-10-09 10:31:02.850459 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850483 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850490 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850495 | orchestrator | 2025-10-09 10:31:02.850500 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-10-09 10:31:02.850506 | orchestrator | Thursday 09 October 2025 10:24:06 +0000 (0:00:00.718) 0:04:51.419 ****** 2025-10-09 10:31:02.850511 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.850517 | orchestrator | 2025-10-09 10:31:02.850522 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-10-09 10:31:02.850528 | orchestrator | Thursday 09 October 2025 10:24:07 +0000 (0:00:01.300) 0:04:52.719 ****** 2025-10-09 10:31:02.850533 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850539 | orchestrator | 2025-10-09 10:31:02.850544 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-10-09 10:31:02.850549 | orchestrator | Thursday 09 October 2025 10:24:08 +0000 (0:00:00.631) 0:04:53.351 ****** 2025-10-09 10:31:02.850555 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:31:02.850560 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.850566 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.850575 | orchestrator | 
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:31:02.850581 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-10-09 10:31:02.850586 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:31:02.850592 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:31:02.850597 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-10-09 10:31:02.850603 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:31:02.850608 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-10-09 10:31:02.850613 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-10-09 10:31:02.850618 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-10-09 10:31:02.850624 | orchestrator | 2025-10-09 10:31:02.850629 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-10-09 10:31:02.850635 | orchestrator | Thursday 09 October 2025 10:24:11 +0000 (0:00:03.907) 0:04:57.259 ****** 2025-10-09 10:31:02.850640 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.850645 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.850651 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.850656 | orchestrator | 2025-10-09 10:31:02.850661 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-10-09 10:31:02.850667 | orchestrator | Thursday 09 October 2025 10:24:13 +0000 (0:00:01.280) 0:04:58.540 ****** 2025-10-09 10:31:02.850672 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850677 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850683 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850688 | orchestrator | 2025-10-09 10:31:02.850694 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-10-09 10:31:02.850699 | orchestrator | Thursday 09 October 2025 10:24:13 +0000 (0:00:00.337) 0:04:58.877 ****** 2025-10-09 10:31:02.850705 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.850710 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.850715 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.850721 | orchestrator | 2025-10-09 10:31:02.850726 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-10-09 10:31:02.850731 | orchestrator | Thursday 09 October 2025 10:24:13 +0000 (0:00:00.373) 0:04:59.250 ****** 2025-10-09 10:31:02.850737 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.850742 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.850747 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.850753 | orchestrator | 2025-10-09 10:31:02.850758 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-10-09 10:31:02.850764 | orchestrator | Thursday 09 October 2025 10:24:16 +0000 (0:00:02.668) 0:05:01.918 ****** 2025-10-09 10:31:02.850769 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.850774 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.850780 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.850785 | orchestrator | 2025-10-09 10:31:02.850790 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-10-09 10:31:02.850796 | orchestrator | Thursday 09 October 2025 10:24:17 +0000 (0:00:01.317) 0:05:03.236 
****** 2025-10-09 10:31:02.850801 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.850807 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.850812 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.850817 | orchestrator | 2025-10-09 10:31:02.850822 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-10-09 10:31:02.850828 | orchestrator | Thursday 09 October 2025 10:24:18 +0000 (0:00:00.380) 0:05:03.616 ****** 2025-10-09 10:31:02.850833 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.850839 | orchestrator | 2025-10-09 10:31:02.850844 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-10-09 10:31:02.850849 | orchestrator | Thursday 09 October 2025 10:24:18 +0000 (0:00:00.541) 0:05:04.157 ****** 2025-10-09 10:31:02.850859 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.850864 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.850870 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.850875 | orchestrator | 2025-10-09 10:31:02.850880 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-10-09 10:31:02.850886 | orchestrator | Thursday 09 October 2025 10:24:19 +0000 (0:00:00.604) 0:05:04.761 ****** 2025-10-09 10:31:02.850891 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.850896 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.850902 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.850907 | orchestrator | 2025-10-09 10:31:02.850916 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-10-09 10:31:02.850921 | orchestrator | Thursday 09 October 2025 10:24:19 +0000 (0:00:00.330) 0:05:05.092 ****** 2025-10-09 10:31:02.850942 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.850948 | orchestrator | 2025-10-09 10:31:02.850954 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-10-09 10:31:02.850959 | orchestrator | Thursday 09 October 2025 10:24:20 +0000 (0:00:00.592) 0:05:05.684 ****** 2025-10-09 10:31:02.850964 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.850970 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.850975 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.850980 | orchestrator | 2025-10-09 10:31:02.850986 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-10-09 10:31:02.850991 | orchestrator | Thursday 09 October 2025 10:24:22 +0000 (0:00:02.113) 0:05:07.798 ****** 2025-10-09 10:31:02.850996 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.851002 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.851007 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.851013 | orchestrator | 2025-10-09 10:31:02.851018 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-10-09 10:31:02.851023 | orchestrator | Thursday 09 October 2025 10:24:23 +0000 (0:00:01.331) 0:05:09.129 ****** 2025-10-09 10:31:02.851029 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.851034 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.851039 | orchestrator | 
changed: [testbed-node-2] 2025-10-09 10:31:02.851045 | orchestrator | 2025-10-09 10:31:02.851050 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-10-09 10:31:02.851055 | orchestrator | Thursday 09 October 2025 10:24:25 +0000 (0:00:01.703) 0:05:10.832 ****** 2025-10-09 10:31:02.851061 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.851066 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.851072 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.851077 | orchestrator | 2025-10-09 10:31:02.851082 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-10-09 10:31:02.851088 | orchestrator | Thursday 09 October 2025 10:24:27 +0000 (0:00:01.959) 0:05:12.792 ****** 2025-10-09 10:31:02.851093 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.851099 | orchestrator | 2025-10-09 10:31:02.851104 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-10-09 10:31:02.851109 | orchestrator | Thursday 09 October 2025 10:24:28 +0000 (0:00:00.880) 0:05:13.672 ****** 2025-10-09 10:31:02.851115 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851120 | orchestrator | 2025-10-09 10:31:02.851125 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-10-09 10:31:02.851131 | orchestrator | Thursday 09 October 2025 10:24:29 +0000 (0:00:01.280) 0:05:14.953 ****** 2025-10-09 10:31:02.851136 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.851142 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.851147 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851152 | orchestrator | 2025-10-09 10:31:02.851158 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-10-09 10:31:02.851168 | orchestrator | Thursday 09 October 2025 10:24:38 +0000 (0:00:09.143) 0:05:24.097 ****** 2025-10-09 10:31:02.851174 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851179 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851185 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851190 | orchestrator | 2025-10-09 10:31:02.851195 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-10-09 10:31:02.851201 | orchestrator | Thursday 09 October 2025 10:24:39 +0000 (0:00:00.410) 0:05:24.507 ****** 2025-10-09 10:31:02.851208 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37e5cc2a377fadf47fe35cf43bce0e754fc6cb99'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-10-09 10:31:02.851214 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37e5cc2a377fadf47fe35cf43bce0e754fc6cb99'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-10-09 10:31:02.851233 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': 
'192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37e5cc2a377fadf47fe35cf43bce0e754fc6cb99'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-10-09 10:31:02.851240 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37e5cc2a377fadf47fe35cf43bce0e754fc6cb99'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-10-09 10:31:02.851266 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37e5cc2a377fadf47fe35cf43bce0e754fc6cb99'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-10-09 10:31:02.851273 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37e5cc2a377fadf47fe35cf43bce0e754fc6cb99'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__37e5cc2a377fadf47fe35cf43bce0e754fc6cb99'}])  2025-10-09 10:31:02.851280 | orchestrator | 2025-10-09 10:31:02.851286 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-09 10:31:02.851291 | orchestrator | Thursday 09 October 2025 10:24:55 +0000 (0:00:15.861) 0:05:40.369 ****** 2025-10-09 10:31:02.851297 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851302 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851308 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851313 | orchestrator | 2025-10-09 10:31:02.851319 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-10-09 10:31:02.851324 | orchestrator | Thursday 09 October 2025 10:24:55 +0000 (0:00:00.451) 0:05:40.820 ****** 2025-10-09 10:31:02.851329 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.851335 | orchestrator | 2025-10-09 10:31:02.851340 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-10-09 10:31:02.851350 | orchestrator | Thursday 09 October 2025 10:24:56 +0000 (0:00:00.847) 0:05:41.668 ****** 2025-10-09 10:31:02.851356 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851361 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.851366 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.851372 | orchestrator | 2025-10-09 10:31:02.851377 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-10-09 10:31:02.851383 | orchestrator | Thursday 09 October 2025 10:24:56 +0000 (0:00:00.396) 0:05:42.065 ****** 2025-10-09 10:31:02.851388 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851393 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851399 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851404 | orchestrator | 
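The ceph-mon play above only names its tasks, so for readers following along: the initial keyring, monmap, mkfs and "Set cluster configs" steps boil down to a handful of ceph CLI calls. The following is a rough sketch of what such tasks typically run, not the verbatim ceph-ansible code used in this job; the ceph_fsid variable, the file paths and the inline host list are placeholders, and in this containerized deployment the real role executes the commands inside the mon containers.

  - name: Generate monitor initial keyring (sketch)
    ansible.builtin.command: >
      ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring
      --gen-key -n mon. --cap mon 'allow *'
    args:
      creates: /etc/ceph/ceph.mon.keyring

  - name: Generate initial monmap (sketch; ceph_fsid and the host list are placeholders)
    ansible.builtin.command: >
      monmaptool --create --clobber --fsid {{ ceph_fsid }}
      --add testbed-node-0 192.168.16.10
      --add testbed-node-1 192.168.16.11
      --add testbed-node-2 192.168.16.12
      /etc/ceph/monmap

  - name: Ceph monitor mkfs with keyring (sketch)
    ansible.builtin.command: >
      ceph-mon --cluster ceph --mkfs -i {{ ansible_facts['hostname'] }}
      --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring

  - name: Set cluster configs (key/value pairs as shown in the log)
    ansible.builtin.command: "ceph config set global {{ item.key }} {{ item.value }}"
    loop:
      - { key: public_network, value: 192.168.16.0/20 }
      - { key: cluster_network, value: 192.168.16.0/20 }
      - { key: osd_pool_default_crush_rule, value: -1 }
      - { key: ms_bind_ipv6, value: 'false' }
      - { key: ms_bind_ipv4, value: 'true' }
    delegate_to: testbed-node-0
    run_once: true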
2025-10-09 10:31:02.851410 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-10-09 10:31:02.851415 | orchestrator | Thursday 09 October 2025 10:24:57 +0000 (0:00:00.373) 0:05:42.438 ****** 2025-10-09 10:31:02.851420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-09 10:31:02.851426 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 10:31:02.851431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 10:31:02.851437 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851442 | orchestrator | 2025-10-09 10:31:02.851447 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-10-09 10:31:02.851453 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:00.973) 0:05:43.412 ****** 2025-10-09 10:31:02.851458 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851464 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.851469 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.851474 | orchestrator | 2025-10-09 10:31:02.851480 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-10-09 10:31:02.851485 | orchestrator | 2025-10-09 10:31:02.851491 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:31:02.851496 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:00.866) 0:05:44.278 ****** 2025-10-09 10:31:02.851501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.851507 | orchestrator | 2025-10-09 10:31:02.851512 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:31:02.851518 | orchestrator | Thursday 09 October 2025 10:24:59 +0000 (0:00:00.543) 0:05:44.822 ****** 2025-10-09 10:31:02.851523 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.851529 | orchestrator | 2025-10-09 10:31:02.851534 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:31:02.851540 | orchestrator | Thursday 09 October 2025 10:25:00 +0000 (0:00:00.834) 0:05:45.656 ****** 2025-10-09 10:31:02.851545 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851550 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.851556 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.851561 | orchestrator | 2025-10-09 10:31:02.851567 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:31:02.851572 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:00.799) 0:05:46.456 ****** 2025-10-09 10:31:02.851578 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851583 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851588 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851594 | orchestrator | 2025-10-09 10:31:02.851599 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:31:02.851604 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:00.344) 0:05:46.801 ****** 2025-10-09 10:31:02.851610 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851615 | orchestrator | skipping: 
[testbed-node-1] 2025-10-09 10:31:02.851621 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851631 | orchestrator | 2025-10-09 10:31:02.851639 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:31:02.851645 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:00.354) 0:05:47.155 ****** 2025-10-09 10:31:02.851650 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851656 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851661 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851667 | orchestrator | 2025-10-09 10:31:02.851688 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:31:02.851694 | orchestrator | Thursday 09 October 2025 10:25:02 +0000 (0:00:00.638) 0:05:47.794 ****** 2025-10-09 10:31:02.851699 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851705 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.851710 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.851716 | orchestrator | 2025-10-09 10:31:02.851721 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:31:02.851726 | orchestrator | Thursday 09 October 2025 10:25:03 +0000 (0:00:00.806) 0:05:48.600 ****** 2025-10-09 10:31:02.851732 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851737 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851743 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851748 | orchestrator | 2025-10-09 10:31:02.851753 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:31:02.851759 | orchestrator | Thursday 09 October 2025 10:25:03 +0000 (0:00:00.359) 0:05:48.959 ****** 2025-10-09 10:31:02.851764 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851770 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851775 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851780 | orchestrator | 2025-10-09 10:31:02.851786 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:31:02.851791 | orchestrator | Thursday 09 October 2025 10:25:04 +0000 (0:00:00.399) 0:05:49.358 ****** 2025-10-09 10:31:02.851796 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851802 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.851807 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.851813 | orchestrator | 2025-10-09 10:31:02.851818 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:31:02.851823 | orchestrator | Thursday 09 October 2025 10:25:04 +0000 (0:00:00.777) 0:05:50.135 ****** 2025-10-09 10:31:02.851829 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851834 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.851839 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.851845 | orchestrator | 2025-10-09 10:31:02.851850 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:31:02.851855 | orchestrator | Thursday 09 October 2025 10:25:06 +0000 (0:00:01.457) 0:05:51.592 ****** 2025-10-09 10:31:02.851861 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851866 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851871 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851877 
| orchestrator | 2025-10-09 10:31:02.851882 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:31:02.851888 | orchestrator | Thursday 09 October 2025 10:25:06 +0000 (0:00:00.412) 0:05:52.005 ****** 2025-10-09 10:31:02.851893 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.851898 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.851904 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.851909 | orchestrator | 2025-10-09 10:31:02.851914 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:31:02.851920 | orchestrator | Thursday 09 October 2025 10:25:07 +0000 (0:00:00.403) 0:05:52.411 ****** 2025-10-09 10:31:02.851925 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851931 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851936 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851941 | orchestrator | 2025-10-09 10:31:02.851946 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:31:02.851958 | orchestrator | Thursday 09 October 2025 10:25:07 +0000 (0:00:00.374) 0:05:52.785 ****** 2025-10-09 10:31:02.851963 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.851969 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.851974 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.851980 | orchestrator | 2025-10-09 10:31:02.851985 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:31:02.851990 | orchestrator | Thursday 09 October 2025 10:25:08 +0000 (0:00:00.535) 0:05:53.321 ****** 2025-10-09 10:31:02.851996 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852001 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.852007 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.852012 | orchestrator | 2025-10-09 10:31:02.852017 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:31:02.852023 | orchestrator | Thursday 09 October 2025 10:25:08 +0000 (0:00:00.280) 0:05:53.602 ****** 2025-10-09 10:31:02.852028 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852034 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.852039 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.852044 | orchestrator | 2025-10-09 10:31:02.852050 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:31:02.852055 | orchestrator | Thursday 09 October 2025 10:25:08 +0000 (0:00:00.295) 0:05:53.897 ****** 2025-10-09 10:31:02.852061 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852066 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.852071 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.852077 | orchestrator | 2025-10-09 10:31:02.852082 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:31:02.852088 | orchestrator | Thursday 09 October 2025 10:25:08 +0000 (0:00:00.335) 0:05:54.233 ****** 2025-10-09 10:31:02.852093 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.852098 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.852104 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.852109 | orchestrator | 2025-10-09 10:31:02.852115 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2025-10-09 10:31:02.852120 | orchestrator | Thursday 09 October 2025 10:25:09 +0000 (0:00:00.482) 0:05:54.715 ****** 2025-10-09 10:31:02.852125 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.852131 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.852136 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.852141 | orchestrator | 2025-10-09 10:31:02.852147 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:31:02.852155 | orchestrator | Thursday 09 October 2025 10:25:09 +0000 (0:00:00.317) 0:05:55.033 ****** 2025-10-09 10:31:02.852161 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.852166 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.852171 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.852177 | orchestrator | 2025-10-09 10:31:02.852182 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-10-09 10:31:02.852203 | orchestrator | Thursday 09 October 2025 10:25:10 +0000 (0:00:00.559) 0:05:55.593 ****** 2025-10-09 10:31:02.852209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:31:02.852215 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:31:02.852233 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:31:02.852239 | orchestrator | 2025-10-09 10:31:02.852244 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-10-09 10:31:02.852250 | orchestrator | Thursday 09 October 2025 10:25:11 +0000 (0:00:00.786) 0:05:56.379 ****** 2025-10-09 10:31:02.852255 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.852261 | orchestrator | 2025-10-09 10:31:02.852266 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-10-09 10:31:02.852275 | orchestrator | Thursday 09 October 2025 10:25:11 +0000 (0:00:00.661) 0:05:57.040 ****** 2025-10-09 10:31:02.852281 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.852286 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.852292 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.852297 | orchestrator | 2025-10-09 10:31:02.852303 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-10-09 10:31:02.852308 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:00.709) 0:05:57.750 ****** 2025-10-09 10:31:02.852314 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852319 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.852324 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.852330 | orchestrator | 2025-10-09 10:31:02.852335 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-10-09 10:31:02.852341 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:00.325) 0:05:58.076 ****** 2025-10-09 10:31:02.852346 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:31:02.852352 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:31:02.852357 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:31:02.852363 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-10-09 
10:31:02.852368 | orchestrator | 2025-10-09 10:31:02.852373 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-10-09 10:31:02.852379 | orchestrator | Thursday 09 October 2025 10:25:23 +0000 (0:00:10.535) 0:06:08.611 ****** 2025-10-09 10:31:02.852384 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.852390 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.852395 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.852401 | orchestrator | 2025-10-09 10:31:02.852406 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-10-09 10:31:02.852412 | orchestrator | Thursday 09 October 2025 10:25:23 +0000 (0:00:00.497) 0:06:09.108 ****** 2025-10-09 10:31:02.852417 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-10-09 10:31:02.852423 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-10-09 10:31:02.852428 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-10-09 10:31:02.852433 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-10-09 10:31:02.852439 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.852444 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.852450 | orchestrator | 2025-10-09 10:31:02.852455 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-10-09 10:31:02.852461 | orchestrator | Thursday 09 October 2025 10:25:26 +0000 (0:00:02.186) 0:06:11.294 ****** 2025-10-09 10:31:02.852466 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-10-09 10:31:02.852472 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-10-09 10:31:02.852477 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-10-09 10:31:02.852482 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:31:02.852488 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-10-09 10:31:02.852493 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-10-09 10:31:02.852499 | orchestrator | 2025-10-09 10:31:02.852504 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-10-09 10:31:02.852510 | orchestrator | Thursday 09 October 2025 10:25:27 +0000 (0:00:01.170) 0:06:12.465 ****** 2025-10-09 10:31:02.852515 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.852520 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.852526 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.852531 | orchestrator | 2025-10-09 10:31:02.852537 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-10-09 10:31:02.852542 | orchestrator | Thursday 09 October 2025 10:25:27 +0000 (0:00:00.698) 0:06:13.164 ****** 2025-10-09 10:31:02.852548 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852557 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.852562 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.852568 | orchestrator | 2025-10-09 10:31:02.852573 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-10-09 10:31:02.852579 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:00.495) 0:06:13.659 ****** 2025-10-09 10:31:02.852584 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852589 | orchestrator | skipping: [testbed-node-1] 2025-10-09 
10:31:02.852595 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.852600 | orchestrator | 2025-10-09 10:31:02.852606 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-10-09 10:31:02.852611 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:00.260) 0:06:13.920 ****** 2025-10-09 10:31:02.852619 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.852625 | orchestrator | 2025-10-09 10:31:02.852630 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-10-09 10:31:02.852636 | orchestrator | Thursday 09 October 2025 10:25:29 +0000 (0:00:00.537) 0:06:14.458 ****** 2025-10-09 10:31:02.852641 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852663 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.852669 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.852674 | orchestrator | 2025-10-09 10:31:02.852680 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-10-09 10:31:02.852685 | orchestrator | Thursday 09 October 2025 10:25:29 +0000 (0:00:00.625) 0:06:15.083 ****** 2025-10-09 10:31:02.852691 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852696 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.852701 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.852707 | orchestrator | 2025-10-09 10:31:02.852712 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-10-09 10:31:02.852717 | orchestrator | Thursday 09 October 2025 10:25:30 +0000 (0:00:00.444) 0:06:15.527 ****** 2025-10-09 10:31:02.852723 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.852728 | orchestrator | 2025-10-09 10:31:02.852734 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-10-09 10:31:02.852739 | orchestrator | Thursday 09 October 2025 10:25:30 +0000 (0:00:00.573) 0:06:16.100 ****** 2025-10-09 10:31:02.852744 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.852750 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.852755 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.852761 | orchestrator | 2025-10-09 10:31:02.852766 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-10-09 10:31:02.852771 | orchestrator | Thursday 09 October 2025 10:25:32 +0000 (0:00:01.492) 0:06:17.593 ****** 2025-10-09 10:31:02.852777 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.852782 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.852788 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.852793 | orchestrator | 2025-10-09 10:31:02.852798 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-10-09 10:31:02.852804 | orchestrator | Thursday 09 October 2025 10:25:33 +0000 (0:00:01.136) 0:06:18.729 ****** 2025-10-09 10:31:02.852809 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.852815 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.852820 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.852825 | orchestrator | 2025-10-09 10:31:02.852830 | orchestrator | TASK [ceph-mgr : Systemd start mgr] 
******************************************** 2025-10-09 10:31:02.852836 | orchestrator | Thursday 09 October 2025 10:25:35 +0000 (0:00:01.835) 0:06:20.565 ****** 2025-10-09 10:31:02.852841 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.852847 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.852852 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.852857 | orchestrator | 2025-10-09 10:31:02.852867 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-10-09 10:31:02.852873 | orchestrator | Thursday 09 October 2025 10:25:37 +0000 (0:00:02.075) 0:06:22.640 ****** 2025-10-09 10:31:02.852878 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.852884 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.852889 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-10-09 10:31:02.852895 | orchestrator | 2025-10-09 10:31:02.852900 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-10-09 10:31:02.852905 | orchestrator | Thursday 09 October 2025 10:25:38 +0000 (0:00:00.712) 0:06:23.353 ****** 2025-10-09 10:31:02.852911 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-10-09 10:31:02.852916 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-10-09 10:31:02.852922 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-10-09 10:31:02.852927 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-10-09 10:31:02.852933 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
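The mgr bootstrap logged above follows a common two-step pattern: create one mgr key per manager host on a monitor with ceph auth get-or-create, then poll until every manager registers. A hedged sketch of that pattern follows; the cap profile is the usual mgr profile, the retry count of 30 matches this run, while the delay and the exact readiness test in until are assumptions.

  - name: Create ceph mgr keyring(s) on a mon node (sketch)
    ansible.builtin.command: >
      ceph auth get-or-create mgr.{{ item }}
      mon 'allow profile mgr' osd 'allow *' mds 'allow *'
      -o /etc/ceph/ceph.mgr.{{ item }}.keyring
    loop: [testbed-node-0, testbed-node-1, testbed-node-2]
    delegate_to: testbed-node-0
    run_once: true

  - name: Wait for all mgr to be up (sketch of the retry pattern)
    ansible.builtin.command: ceph mgr dump --format json
    register: _mgr_dump
    retries: 30
    delay: 5                  # assumed; only the retry count is visible in the log
    until: >
      (_mgr_dump.stdout | from_json).available | bool
      and ((_mgr_dump.stdout | from_json).standbys | length) >= 2
    changed_when: false
    delegate_to: testbed-node-0
    run_once: true

Polling for the active manager plus the expected standbys before continuing is what produces the "FAILED - RETRYING" lines above; it keeps the module enable/disable steps that follow from racing a manager that has not finished starting.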
2025-10-09 10:31:02.852938 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:31:02.852943 | orchestrator | 2025-10-09 10:31:02.852949 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-10-09 10:31:02.852954 | orchestrator | Thursday 09 October 2025 10:26:08 +0000 (0:00:30.283) 0:06:53.637 ****** 2025-10-09 10:31:02.852959 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:31:02.852965 | orchestrator | 2025-10-09 10:31:02.852970 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-10-09 10:31:02.852975 | orchestrator | Thursday 09 October 2025 10:26:09 +0000 (0:00:01.322) 0:06:54.959 ****** 2025-10-09 10:31:02.852980 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.852986 | orchestrator | 2025-10-09 10:31:02.852991 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-10-09 10:31:02.852997 | orchestrator | Thursday 09 October 2025 10:26:10 +0000 (0:00:00.372) 0:06:55.331 ****** 2025-10-09 10:31:02.853002 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.853007 | orchestrator | 2025-10-09 10:31:02.853013 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-10-09 10:31:02.853018 | orchestrator | Thursday 09 October 2025 10:26:10 +0000 (0:00:00.164) 0:06:55.496 ****** 2025-10-09 10:31:02.853023 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-10-09 10:31:02.853029 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-10-09 10:31:02.853037 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-10-09 10:31:02.853042 | orchestrator | 2025-10-09 10:31:02.853048 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-10-09 10:31:02.853053 | orchestrator | Thursday 09 October 2025 10:26:16 +0000 (0:00:06.481) 0:07:01.978 ****** 2025-10-09 10:31:02.853058 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-10-09 10:31:02.853080 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-10-09 10:31:02.853086 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-10-09 10:31:02.853091 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-10-09 10:31:02.853097 | orchestrator | 2025-10-09 10:31:02.853102 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-09 10:31:02.853108 | orchestrator | Thursday 09 October 2025 10:26:21 +0000 (0:00:05.124) 0:07:07.102 ****** 2025-10-09 10:31:02.853113 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.853119 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.853128 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.853133 | orchestrator | 2025-10-09 10:31:02.853139 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-10-09 10:31:02.853144 | orchestrator | Thursday 09 October 2025 10:26:22 +0000 (0:00:00.749) 0:07:07.852 ****** 2025-10-09 10:31:02.853150 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:31:02.853155 | orchestrator | 2025-10-09 
10:31:02.853160 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-10-09 10:31:02.853166 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:00.558) 0:07:08.410 ****** 2025-10-09 10:31:02.853171 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.853176 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.853182 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.853187 | orchestrator | 2025-10-09 10:31:02.853192 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-10-09 10:31:02.853198 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:00.327) 0:07:08.738 ****** 2025-10-09 10:31:02.853203 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.853208 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.853214 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.853248 | orchestrator | 2025-10-09 10:31:02.853254 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-10-09 10:31:02.853259 | orchestrator | Thursday 09 October 2025 10:26:25 +0000 (0:00:01.565) 0:07:10.303 ****** 2025-10-09 10:31:02.853265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-09 10:31:02.853270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 10:31:02.853276 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 10:31:02.853281 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.853287 | orchestrator | 2025-10-09 10:31:02.853292 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-10-09 10:31:02.853298 | orchestrator | Thursday 09 October 2025 10:26:25 +0000 (0:00:00.753) 0:07:11.057 ****** 2025-10-09 10:31:02.853303 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.853309 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.853314 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.853320 | orchestrator | 2025-10-09 10:31:02.853325 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-10-09 10:31:02.853331 | orchestrator | 2025-10-09 10:31:02.853336 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:31:02.853342 | orchestrator | Thursday 09 October 2025 10:26:26 +0000 (0:00:00.596) 0:07:11.653 ****** 2025-10-09 10:31:02.853347 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.853353 | orchestrator | 2025-10-09 10:31:02.853358 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:31:02.853364 | orchestrator | Thursday 09 October 2025 10:26:27 +0000 (0:00:00.821) 0:07:12.474 ****** 2025-10-09 10:31:02.853369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.853375 | orchestrator | 2025-10-09 10:31:02.853380 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:31:02.853385 | orchestrator | Thursday 09 October 2025 10:26:27 +0000 (0:00:00.589) 0:07:13.064 ****** 2025-10-09 10:31:02.853391 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853396 | orchestrator | skipping: [testbed-node-4] 
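The mgr module handling logged a little earlier (disable iostat, nfs and restful; enable dashboard and prometheus) maps directly onto the ceph mgr module CLI. A minimal sketch, with the module lists taken from this run:

  - name: Disable ceph mgr enabled modules (list taken from this run)
    ansible.builtin.command: "ceph mgr module disable {{ item }}"
    loop: [iostat, nfs, restful]
    delegate_to: testbed-node-0
    run_once: true

  - name: Add modules to ceph-mgr (list taken from this run)
    ansible.builtin.command: "ceph mgr module enable {{ item }}"
    loop: [dashboard, prometheus]
    delegate_to: testbed-node-0
    run_once: true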
2025-10-09 10:31:02.853402 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853407 | orchestrator | 2025-10-09 10:31:02.853412 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:31:02.853418 | orchestrator | Thursday 09 October 2025 10:26:28 +0000 (0:00:00.573) 0:07:13.637 ****** 2025-10-09 10:31:02.853423 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853433 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853438 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853443 | orchestrator | 2025-10-09 10:31:02.853448 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:31:02.853453 | orchestrator | Thursday 09 October 2025 10:26:29 +0000 (0:00:00.675) 0:07:14.313 ****** 2025-10-09 10:31:02.853457 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853462 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853467 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853472 | orchestrator | 2025-10-09 10:31:02.853477 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:31:02.853481 | orchestrator | Thursday 09 October 2025 10:26:29 +0000 (0:00:00.716) 0:07:15.030 ****** 2025-10-09 10:31:02.853486 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853491 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853496 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853500 | orchestrator | 2025-10-09 10:31:02.853505 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:31:02.853513 | orchestrator | Thursday 09 October 2025 10:26:30 +0000 (0:00:00.728) 0:07:15.759 ****** 2025-10-09 10:31:02.853518 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853523 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.853528 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853533 | orchestrator | 2025-10-09 10:31:02.853538 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:31:02.853558 | orchestrator | Thursday 09 October 2025 10:26:31 +0000 (0:00:00.611) 0:07:16.371 ****** 2025-10-09 10:31:02.853564 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853568 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.853573 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853578 | orchestrator | 2025-10-09 10:31:02.853583 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:31:02.853588 | orchestrator | Thursday 09 October 2025 10:26:31 +0000 (0:00:00.334) 0:07:16.705 ****** 2025-10-09 10:31:02.853592 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853597 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.853602 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853607 | orchestrator | 2025-10-09 10:31:02.853612 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:31:02.853616 | orchestrator | Thursday 09 October 2025 10:26:31 +0000 (0:00:00.321) 0:07:17.026 ****** 2025-10-09 10:31:02.853621 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853626 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853631 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853635 | orchestrator | 2025-10-09 
10:31:02.853640 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:31:02.853645 | orchestrator | Thursday 09 October 2025 10:26:32 +0000 (0:00:00.729) 0:07:17.756 ****** 2025-10-09 10:31:02.853650 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853655 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853659 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853664 | orchestrator | 2025-10-09 10:31:02.853669 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:31:02.853674 | orchestrator | Thursday 09 October 2025 10:26:33 +0000 (0:00:00.696) 0:07:18.453 ****** 2025-10-09 10:31:02.853678 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853683 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.853688 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853693 | orchestrator | 2025-10-09 10:31:02.853698 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:31:02.853702 | orchestrator | Thursday 09 October 2025 10:26:33 +0000 (0:00:00.604) 0:07:19.057 ****** 2025-10-09 10:31:02.853707 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853712 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.853717 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853725 | orchestrator | 2025-10-09 10:31:02.853730 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:31:02.853735 | orchestrator | Thursday 09 October 2025 10:26:34 +0000 (0:00:00.325) 0:07:19.383 ****** 2025-10-09 10:31:02.853740 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853744 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853749 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853754 | orchestrator | 2025-10-09 10:31:02.853759 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:31:02.853764 | orchestrator | Thursday 09 October 2025 10:26:34 +0000 (0:00:00.337) 0:07:19.720 ****** 2025-10-09 10:31:02.853768 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853773 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853778 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853783 | orchestrator | 2025-10-09 10:31:02.853787 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:31:02.853792 | orchestrator | Thursday 09 October 2025 10:26:34 +0000 (0:00:00.368) 0:07:20.089 ****** 2025-10-09 10:31:02.853797 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853802 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853806 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853811 | orchestrator | 2025-10-09 10:31:02.853816 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:31:02.853820 | orchestrator | Thursday 09 October 2025 10:26:35 +0000 (0:00:00.750) 0:07:20.839 ****** 2025-10-09 10:31:02.853825 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853830 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.853835 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853840 | orchestrator | 2025-10-09 10:31:02.853844 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 
10:31:02.853849 | orchestrator | Thursday 09 October 2025 10:26:35 +0000 (0:00:00.444) 0:07:21.284 ****** 2025-10-09 10:31:02.853854 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853859 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.853864 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853868 | orchestrator | 2025-10-09 10:31:02.853873 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:31:02.853878 | orchestrator | Thursday 09 October 2025 10:26:36 +0000 (0:00:00.353) 0:07:21.638 ****** 2025-10-09 10:31:02.853883 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.853887 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.853892 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.853897 | orchestrator | 2025-10-09 10:31:02.853902 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:31:02.853907 | orchestrator | Thursday 09 October 2025 10:26:36 +0000 (0:00:00.351) 0:07:21.990 ****** 2025-10-09 10:31:02.853911 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853916 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853921 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853926 | orchestrator | 2025-10-09 10:31:02.853930 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:31:02.853935 | orchestrator | Thursday 09 October 2025 10:26:37 +0000 (0:00:00.727) 0:07:22.717 ****** 2025-10-09 10:31:02.853940 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853945 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853949 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853954 | orchestrator | 2025-10-09 10:31:02.853959 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-10-09 10:31:02.853964 | orchestrator | Thursday 09 October 2025 10:26:38 +0000 (0:00:00.621) 0:07:23.339 ****** 2025-10-09 10:31:02.853968 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.853976 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.853981 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.853985 | orchestrator | 2025-10-09 10:31:02.853990 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-10-09 10:31:02.853999 | orchestrator | Thursday 09 October 2025 10:26:38 +0000 (0:00:00.334) 0:07:23.673 ****** 2025-10-09 10:31:02.854006 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:31:02.854011 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:31:02.854031 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:31:02.854036 | orchestrator | 2025-10-09 10:31:02.854041 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-10-09 10:31:02.854045 | orchestrator | Thursday 09 October 2025 10:26:39 +0000 (0:00:01.235) 0:07:24.909 ****** 2025-10-09 10:31:02.854050 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.854055 | orchestrator | 2025-10-09 10:31:02.854060 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-10-09 10:31:02.854065 | 
orchestrator | Thursday 09 October 2025 10:26:40 +0000 (0:00:00.571) 0:07:25.481 ****** 2025-10-09 10:31:02.854069 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.854074 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.854079 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.854084 | orchestrator | 2025-10-09 10:31:02.854089 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-10-09 10:31:02.854093 | orchestrator | Thursday 09 October 2025 10:26:40 +0000 (0:00:00.351) 0:07:25.832 ****** 2025-10-09 10:31:02.854098 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.854103 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.854108 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.854112 | orchestrator | 2025-10-09 10:31:02.854117 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-10-09 10:31:02.854122 | orchestrator | Thursday 09 October 2025 10:26:41 +0000 (0:00:00.595) 0:07:26.428 ****** 2025-10-09 10:31:02.854127 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.854132 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.854136 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.854141 | orchestrator | 2025-10-09 10:31:02.854146 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-10-09 10:31:02.854151 | orchestrator | Thursday 09 October 2025 10:26:41 +0000 (0:00:00.659) 0:07:27.088 ****** 2025-10-09 10:31:02.854156 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.854160 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.854165 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.854170 | orchestrator | 2025-10-09 10:31:02.854175 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-10-09 10:31:02.854180 | orchestrator | Thursday 09 October 2025 10:26:42 +0000 (0:00:00.406) 0:07:27.494 ****** 2025-10-09 10:31:02.854184 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-09 10:31:02.854189 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-09 10:31:02.854194 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-09 10:31:02.854199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-09 10:31:02.854204 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-09 10:31:02.854208 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-09 10:31:02.854213 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-09 10:31:02.854229 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-09 10:31:02.854234 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-09 10:31:02.854239 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-10-09 10:31:02.854248 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-10-09 10:31:02.854253 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'vm.swappiness', 'value': 10}) 2025-10-09 10:31:02.854258 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-09 10:31:02.854263 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-09 10:31:02.854267 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-09 10:31:02.854272 | orchestrator | 2025-10-09 10:31:02.854277 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-10-09 10:31:02.854282 | orchestrator | Thursday 09 October 2025 10:26:44 +0000 (0:00:02.141) 0:07:29.636 ****** 2025-10-09 10:31:02.854286 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.854291 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.854296 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.854301 | orchestrator | 2025-10-09 10:31:02.854306 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-10-09 10:31:02.854310 | orchestrator | Thursday 09 October 2025 10:26:44 +0000 (0:00:00.596) 0:07:30.233 ****** 2025-10-09 10:31:02.854315 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.854320 | orchestrator | 2025-10-09 10:31:02.854325 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-10-09 10:31:02.854332 | orchestrator | Thursday 09 October 2025 10:26:45 +0000 (0:00:00.604) 0:07:30.837 ****** 2025-10-09 10:31:02.854337 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-10-09 10:31:02.854342 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-10-09 10:31:02.854347 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-10-09 10:31:02.854354 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-10-09 10:31:02.854360 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-10-09 10:31:02.854364 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-10-09 10:31:02.854369 | orchestrator | 2025-10-09 10:31:02.854374 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-10-09 10:31:02.854379 | orchestrator | Thursday 09 October 2025 10:26:46 +0000 (0:00:01.017) 0:07:31.854 ****** 2025-10-09 10:31:02.854384 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.854388 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:31:02.854393 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:31:02.854398 | orchestrator | 2025-10-09 10:31:02.854403 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-10-09 10:31:02.854408 | orchestrator | Thursday 09 October 2025 10:26:48 +0000 (0:00:02.010) 0:07:33.865 ****** 2025-10-09 10:31:02.854412 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:31:02.854417 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:31:02.854422 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.854427 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:31:02.854432 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-10-09 10:31:02.854436 | 
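The "Apply operating system tuning" task above pushes a fixed set of kernel parameters to the OSD hosts: fs.aio-max-nr, fs.file-max, vm.zone_reclaim_mode, vm.swappiness, and a vm.min_free_kbytes value derived from the node's default. A minimal standalone sketch of the same tuning follows; the values are copied from the log items, while the host pattern and the use of ansible.posix.sysctl (rather than the actual ceph-ansible role internals) are assumptions.

- name: Apply OSD host kernel tuning (sketch, not the ceph-ansible role)
  hosts: testbed-node-3,testbed-node-4,testbed-node-5   # host list assumed from the log
  become: true
  vars:
    os_tuning_params:                  # values as reported in the task output above
      - { name: fs.aio-max-nr, value: "1048576" }
      - { name: fs.file-max, value: "26234859" }
      - { name: vm.zone_reclaim_mode, value: "0" }
      - { name: vm.swappiness, value: "10" }
      - { name: vm.min_free_kbytes, value: "67584" }
  tasks:
    - name: Persist and apply each sysctl value
      ansible.posix.sysctl:            # requires the ansible.posix collection
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true               # apply immediately as well as persisting
      loop: "{{ os_tuning_params }}"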
orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.854441 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:31:02.854446 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-10-09 10:31:02.854451 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.854455 | orchestrator | 2025-10-09 10:31:02.854460 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-10-09 10:31:02.854465 | orchestrator | Thursday 09 October 2025 10:26:50 +0000 (0:00:01.913) 0:07:35.778 ****** 2025-10-09 10:31:02.854475 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:31:02.854480 | orchestrator | 2025-10-09 10:31:02.854485 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-10-09 10:31:02.854489 | orchestrator | Thursday 09 October 2025 10:26:52 +0000 (0:00:02.352) 0:07:38.130 ****** 2025-10-09 10:31:02.854494 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.854499 | orchestrator | 2025-10-09 10:31:02.854504 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-10-09 10:31:02.854508 | orchestrator | Thursday 09 October 2025 10:26:53 +0000 (0:00:00.572) 0:07:38.703 ****** 2025-10-09 10:31:02.854513 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ef021fd3-cbd8-5c2d-a302-cb22746ffda4', 'data_vg': 'ceph-ef021fd3-cbd8-5c2d-a302-cb22746ffda4'}) 2025-10-09 10:31:02.854519 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-54591ae8-508f-52be-a28f-431401b6d1ad', 'data_vg': 'ceph-54591ae8-508f-52be-a28f-431401b6d1ad'}) 2025-10-09 10:31:02.854523 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-858e0050-0995-591a-8058-d3b9e3e91008', 'data_vg': 'ceph-858e0050-0995-591a-8058-d3b9e3e91008'}) 2025-10-09 10:31:02.854528 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dd550970-9f11-5339-ba56-252bcc7474b0', 'data_vg': 'ceph-dd550970-9f11-5339-ba56-252bcc7474b0'}) 2025-10-09 10:31:02.854533 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7a727a7b-5e51-5ff8-a4b5-d0215765e108', 'data_vg': 'ceph-7a727a7b-5e51-5ff8-a4b5-d0215765e108'}) 2025-10-09 10:31:02.854538 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b4e7e843-affd-596c-a641-84f7bc574f86', 'data_vg': 'ceph-b4e7e843-affd-596c-a641-84f7bc574f86'}) 2025-10-09 10:31:02.854543 | orchestrator | 2025-10-09 10:31:02.854547 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-10-09 10:31:02.854552 | orchestrator | Thursday 09 October 2025 10:27:32 +0000 (0:00:38.864) 0:08:17.567 ****** 2025-10-09 10:31:02.854557 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.854562 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.854567 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.854571 | orchestrator | 2025-10-09 10:31:02.854576 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-10-09 10:31:02.854581 | orchestrator | Thursday 09 October 2025 10:27:32 +0000 (0:00:00.652) 0:08:18.220 ****** 2025-10-09 10:31:02.854586 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.854591 | orchestrator | 
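The "Use ceph-volume to create osds" step above takes the pre-provisioned logical volumes (osd-block-<uuid> inside a ceph-<uuid> volume group, two per node) and turns each into a BlueStore OSD. A hedged sketch of the underlying operation is shown here; it calls the ceph-volume CLI directly through the command module, whereas the real role uses its own ceph_volume module and runs the tool inside the Ceph container.

- name: Create BlueStore OSDs from pre-provisioned LVs (sketch)
  hosts: testbed-node-3              # one of the OSD hosts from the log
  become: true
  vars:
    lvm_volumes:                     # same shape as the loop items printed above
      - data: osd-block-54591ae8-508f-52be-a28f-431401b6d1ad
        data_vg: ceph-54591ae8-508f-52be-a28f-431401b6d1ad
  tasks:
    - name: Run ceph-volume lvm create for each data LV
      ansible.builtin.command: >
        ceph-volume --cluster ceph lvm create
        --bluestore --data {{ item.data_vg }}/{{ item.data }}
      loop: "{{ lvm_volumes }}"
      changed_when: true             # ceph-volume always prepares and activates the OSD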
2025-10-09 10:31:02.854595 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-10-09 10:31:02.854600 | orchestrator | Thursday 09 October 2025 10:27:33 +0000 (0:00:00.564) 0:08:18.784 ****** 2025-10-09 10:31:02.854605 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.854610 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.854614 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.854619 | orchestrator | 2025-10-09 10:31:02.854624 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-10-09 10:31:02.854629 | orchestrator | Thursday 09 October 2025 10:27:34 +0000 (0:00:00.632) 0:08:19.417 ****** 2025-10-09 10:31:02.854634 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.854641 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.854646 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.854651 | orchestrator | 2025-10-09 10:31:02.854656 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-10-09 10:31:02.854660 | orchestrator | Thursday 09 October 2025 10:27:36 +0000 (0:00:02.770) 0:08:22.188 ****** 2025-10-09 10:31:02.854667 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.854673 | orchestrator | 2025-10-09 10:31:02.854677 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-10-09 10:31:02.854687 | orchestrator | Thursday 09 October 2025 10:27:37 +0000 (0:00:00.577) 0:08:22.766 ****** 2025-10-09 10:31:02.854692 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.854697 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.854702 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.854706 | orchestrator | 2025-10-09 10:31:02.854711 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-10-09 10:31:02.854716 | orchestrator | Thursday 09 October 2025 10:27:38 +0000 (0:00:01.227) 0:08:23.993 ****** 2025-10-09 10:31:02.854721 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.854726 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.854730 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.854735 | orchestrator | 2025-10-09 10:31:02.854740 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-10-09 10:31:02.854745 | orchestrator | Thursday 09 October 2025 10:27:40 +0000 (0:00:01.458) 0:08:25.452 ****** 2025-10-09 10:31:02.854750 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.854755 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.854759 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.854764 | orchestrator | 2025-10-09 10:31:02.854769 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-10-09 10:31:02.854774 | orchestrator | Thursday 09 October 2025 10:27:41 +0000 (0:00:01.587) 0:08:27.039 ****** 2025-10-09 10:31:02.854779 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.854783 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.854788 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.854793 | orchestrator | 2025-10-09 10:31:02.854798 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-10-09 10:31:02.854803 | orchestrator | 
Thursday 09 October 2025 10:27:42 +0000 (0:00:00.347) 0:08:27.387 ****** 2025-10-09 10:31:02.854807 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.854812 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.854817 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.854822 | orchestrator | 2025-10-09 10:31:02.854827 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-10-09 10:31:02.854831 | orchestrator | Thursday 09 October 2025 10:27:42 +0000 (0:00:00.336) 0:08:27.724 ****** 2025-10-09 10:31:02.854836 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-10-09 10:31:02.854841 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-10-09 10:31:02.854846 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-10-09 10:31:02.854850 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-10-09 10:31:02.854855 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-10-09 10:31:02.854860 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-10-09 10:31:02.854865 | orchestrator | 2025-10-09 10:31:02.854870 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-10-09 10:31:02.854875 | orchestrator | Thursday 09 October 2025 10:27:43 +0000 (0:00:01.379) 0:08:29.103 ****** 2025-10-09 10:31:02.854879 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-10-09 10:31:02.854884 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-10-09 10:31:02.854889 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-10-09 10:31:02.854894 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-10-09 10:31:02.854898 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-10-09 10:31:02.854903 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-10-09 10:31:02.854908 | orchestrator | 2025-10-09 10:31:02.854913 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-10-09 10:31:02.854918 | orchestrator | Thursday 09 October 2025 10:27:46 +0000 (0:00:02.211) 0:08:31.315 ****** 2025-10-09 10:31:02.854922 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-10-09 10:31:02.854927 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-10-09 10:31:02.854932 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-10-09 10:31:02.854936 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-10-09 10:31:02.854945 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-10-09 10:31:02.854950 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-10-09 10:31:02.854954 | orchestrator | 2025-10-09 10:31:02.854959 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-10-09 10:31:02.854964 | orchestrator | Thursday 09 October 2025 10:27:49 +0000 (0:00:03.325) 0:08:34.641 ****** 2025-10-09 10:31:02.854969 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.854976 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.854984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:31:02.854992 | orchestrator | 2025-10-09 10:31:02.854999 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-10-09 10:31:02.855004 | orchestrator | Thursday 09 October 2025 10:27:52 +0000 (0:00:03.053) 0:08:37.694 ****** 2025-10-09 10:31:02.855009 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855014 | orchestrator | skipping: [testbed-node-4] 2025-10-09 
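Once the per-OSD directories and run files exist, "Systemd start osd" brings up one service instance per OSD id collected earlier (ids 0-5 spread across the three nodes). A minimal sketch of that start step, assuming the conventional ceph-osd@<id> instance naming for the unit templated above:

- name: Start one systemd instance per local OSD id (sketch)
  hosts: testbed-node-3
  become: true
  vars:
    osd_ids: [4, 1]                  # ids reported for this node in the log
  tasks:
    - name: Enable and start ceph-osd@<id> for every local OSD
      ansible.builtin.systemd:
        name: "ceph-osd@{{ item }}"
        state: started
        enabled: true
        daemon_reload: true          # pick up the freshly templated unit files
      loop: "{{ osd_ids }}"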
10:31:02.855019 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-10-09 10:31:02.855024 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:31:02.855028 | orchestrator | 2025-10-09 10:31:02.855033 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-10-09 10:31:02.855038 | orchestrator | Thursday 09 October 2025 10:28:05 +0000 (0:00:12.915) 0:08:50.610 ****** 2025-10-09 10:31:02.855043 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855048 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855052 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855057 | orchestrator | 2025-10-09 10:31:02.855065 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-09 10:31:02.855070 | orchestrator | Thursday 09 October 2025 10:28:06 +0000 (0:00:00.939) 0:08:51.549 ****** 2025-10-09 10:31:02.855075 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855079 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855084 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855089 | orchestrator | 2025-10-09 10:31:02.855096 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-10-09 10:31:02.855101 | orchestrator | Thursday 09 October 2025 10:28:06 +0000 (0:00:00.708) 0:08:52.257 ****** 2025-10-09 10:31:02.855106 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.855111 | orchestrator | 2025-10-09 10:31:02.855116 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-10-09 10:31:02.855120 | orchestrator | Thursday 09 October 2025 10:28:07 +0000 (0:00:00.572) 0:08:52.830 ****** 2025-10-09 10:31:02.855125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.855130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.855135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.855139 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855144 | orchestrator | 2025-10-09 10:31:02.855149 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-10-09 10:31:02.855154 | orchestrator | Thursday 09 October 2025 10:28:07 +0000 (0:00:00.435) 0:08:53.266 ****** 2025-10-09 10:31:02.855158 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855163 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855168 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855173 | orchestrator | 2025-10-09 10:31:02.855178 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-10-09 10:31:02.855182 | orchestrator | Thursday 09 October 2025 10:28:08 +0000 (0:00:00.636) 0:08:53.903 ****** 2025-10-09 10:31:02.855187 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855192 | orchestrator | 2025-10-09 10:31:02.855197 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-10-09 10:31:02.855202 | orchestrator | Thursday 09 October 2025 10:28:08 +0000 (0:00:00.271) 0:08:54.174 ****** 2025-10-09 10:31:02.855210 | orchestrator | skipping: [testbed-node-3] 2025-10-09 
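OSD activation is bracketed by the cluster-wide noup flag: it is set on a monitor before any OSD is created, cleared once the services are running, and the play then polls until every OSD reports up (the single "FAILED - RETRYING ... (60 retries left)" line above is one iteration of that poll). A hedged sketch of the same pattern using plain ceph CLI calls, delegated to the first monitor as in the log:

- name: Gate OSD activation with the noup flag (sketch)
  hosts: testbed-node-0
  become: true
  tasks:
    - name: Keep new OSDs from being marked up while they are being created
      ansible.builtin.command: ceph osd set noup
      changed_when: true

    # ... OSD creation and service start happen between these two steps ...

    - name: Allow the OSDs to come up again
      ansible.builtin.command: ceph osd unset noup
      changed_when: true

    - name: Wait until every OSD is reported up
      ansible.builtin.command: ceph osd stat --format json
      register: osd_stat
      retries: 60                    # matches the retry budget seen in the log
      delay: 10                      # poll interval is an assumption
      until: >
        (osd_stat.stdout | from_json).num_osds > 0 and
        (osd_stat.stdout | from_json).num_up_osds ==
        (osd_stat.stdout | from_json).num_osds
      changed_when: false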
10:31:02.855215 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855229 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855234 | orchestrator | 2025-10-09 10:31:02.855239 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-10-09 10:31:02.855244 | orchestrator | Thursday 09 October 2025 10:28:09 +0000 (0:00:00.354) 0:08:54.529 ****** 2025-10-09 10:31:02.855249 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855253 | orchestrator | 2025-10-09 10:31:02.855258 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-10-09 10:31:02.855263 | orchestrator | Thursday 09 October 2025 10:28:09 +0000 (0:00:00.263) 0:08:54.793 ****** 2025-10-09 10:31:02.855268 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855273 | orchestrator | 2025-10-09 10:31:02.855278 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-10-09 10:31:02.855283 | orchestrator | Thursday 09 October 2025 10:28:09 +0000 (0:00:00.258) 0:08:55.051 ****** 2025-10-09 10:31:02.855287 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855292 | orchestrator | 2025-10-09 10:31:02.855297 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-10-09 10:31:02.855302 | orchestrator | Thursday 09 October 2025 10:28:09 +0000 (0:00:00.212) 0:08:55.264 ****** 2025-10-09 10:31:02.855307 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855312 | orchestrator | 2025-10-09 10:31:02.855316 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-10-09 10:31:02.855321 | orchestrator | Thursday 09 October 2025 10:28:10 +0000 (0:00:00.329) 0:08:55.593 ****** 2025-10-09 10:31:02.855326 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855331 | orchestrator | 2025-10-09 10:31:02.855336 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-10-09 10:31:02.855340 | orchestrator | Thursday 09 October 2025 10:28:10 +0000 (0:00:00.251) 0:08:55.845 ****** 2025-10-09 10:31:02.855345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.855350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.855355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.855360 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855365 | orchestrator | 2025-10-09 10:31:02.855369 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-10-09 10:31:02.855374 | orchestrator | Thursday 09 October 2025 10:28:10 +0000 (0:00:00.431) 0:08:56.277 ****** 2025-10-09 10:31:02.855379 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855384 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855389 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855394 | orchestrator | 2025-10-09 10:31:02.855398 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-10-09 10:31:02.855403 | orchestrator | Thursday 09 October 2025 10:28:11 +0000 (0:00:00.598) 0:08:56.875 ****** 2025-10-09 10:31:02.855408 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855413 | orchestrator | 2025-10-09 10:31:02.855418 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable 
balancer] **************************** 2025-10-09 10:31:02.855423 | orchestrator | Thursday 09 October 2025 10:28:11 +0000 (0:00:00.227) 0:08:57.103 ****** 2025-10-09 10:31:02.855427 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855432 | orchestrator | 2025-10-09 10:31:02.855437 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-10-09 10:31:02.855442 | orchestrator | 2025-10-09 10:31:02.855447 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:31:02.855452 | orchestrator | Thursday 09 October 2025 10:28:12 +0000 (0:00:00.729) 0:08:57.832 ****** 2025-10-09 10:31:02.855459 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.855469 | orchestrator | 2025-10-09 10:31:02.855473 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:31:02.855478 | orchestrator | Thursday 09 October 2025 10:28:13 +0000 (0:00:01.400) 0:08:59.233 ****** 2025-10-09 10:31:02.855486 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.855491 | orchestrator | 2025-10-09 10:31:02.855496 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:31:02.855500 | orchestrator | Thursday 09 October 2025 10:28:15 +0000 (0:00:01.381) 0:09:00.615 ****** 2025-10-09 10:31:02.855505 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855510 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.855515 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855520 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.855525 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.855529 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855534 | orchestrator | 2025-10-09 10:31:02.855539 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:31:02.855544 | orchestrator | Thursday 09 October 2025 10:28:16 +0000 (0:00:01.038) 0:09:01.653 ****** 2025-10-09 10:31:02.855549 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.855554 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.855558 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.855563 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.855568 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.855573 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.855578 | orchestrator | 2025-10-09 10:31:02.855583 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:31:02.855587 | orchestrator | Thursday 09 October 2025 10:28:17 +0000 (0:00:01.007) 0:09:02.660 ****** 2025-10-09 10:31:02.855592 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.855597 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.855602 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.855607 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.855611 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.855616 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.855621 | orchestrator | 2025-10-09 10:31:02.855626 | 
orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:31:02.855631 | orchestrator | Thursday 09 October 2025 10:28:18 +0000 (0:00:01.392) 0:09:04.053 ****** 2025-10-09 10:31:02.855636 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.855641 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.855645 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.855650 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.855655 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.855660 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.855665 | orchestrator | 2025-10-09 10:31:02.855670 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:31:02.855675 | orchestrator | Thursday 09 October 2025 10:28:19 +0000 (0:00:01.051) 0:09:05.104 ****** 2025-10-09 10:31:02.855679 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855684 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.855689 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855694 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.855699 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855704 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.855708 | orchestrator | 2025-10-09 10:31:02.855713 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:31:02.855718 | orchestrator | Thursday 09 October 2025 10:28:20 +0000 (0:00:01.000) 0:09:06.105 ****** 2025-10-09 10:31:02.855723 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.855728 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.855733 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.855741 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855746 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855751 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855755 | orchestrator | 2025-10-09 10:31:02.855760 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:31:02.855765 | orchestrator | Thursday 09 October 2025 10:28:21 +0000 (0:00:00.671) 0:09:06.776 ****** 2025-10-09 10:31:02.855770 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.855775 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.855780 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.855784 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855789 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855794 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855799 | orchestrator | 2025-10-09 10:31:02.855803 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:31:02.855808 | orchestrator | Thursday 09 October 2025 10:28:22 +0000 (0:00:00.905) 0:09:07.682 ****** 2025-10-09 10:31:02.855813 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.855818 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.855823 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.855828 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.855833 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.855837 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.855842 | orchestrator | 2025-10-09 10:31:02.855847 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter 
container] ********************** 2025-10-09 10:31:02.855852 | orchestrator | Thursday 09 October 2025 10:28:23 +0000 (0:00:00.980) 0:09:08.662 ****** 2025-10-09 10:31:02.855857 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.855862 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.855866 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.855871 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.855876 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.855880 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.855885 | orchestrator | 2025-10-09 10:31:02.855890 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:31:02.855895 | orchestrator | Thursday 09 October 2025 10:28:24 +0000 (0:00:01.214) 0:09:09.877 ****** 2025-10-09 10:31:02.855900 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.855905 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.855910 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.855914 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855922 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855927 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855932 | orchestrator | 2025-10-09 10:31:02.855937 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:31:02.855942 | orchestrator | Thursday 09 October 2025 10:28:25 +0000 (0:00:00.669) 0:09:10.547 ****** 2025-10-09 10:31:02.855947 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.855953 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.855959 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.855963 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.855968 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.855973 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.855978 | orchestrator | 2025-10-09 10:31:02.855983 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:31:02.855988 | orchestrator | Thursday 09 October 2025 10:28:25 +0000 (0:00:00.598) 0:09:11.146 ****** 2025-10-09 10:31:02.855992 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.855997 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.856002 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.856007 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856012 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856016 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856021 | orchestrator | 2025-10-09 10:31:02.856026 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:31:02.856034 | orchestrator | Thursday 09 October 2025 10:28:26 +0000 (0:00:00.947) 0:09:12.093 ****** 2025-10-09 10:31:02.856039 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.856044 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.856049 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.856053 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856058 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856063 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856068 | orchestrator | 2025-10-09 10:31:02.856073 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:31:02.856078 | orchestrator | 
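Each "Check for a ... container" / "Set_fact handler_..._status" pair simply records whether the corresponding daemon container already exists on a host, so that the restart handlers later only touch daemons that are actually deployed there. A minimal sketch of that probe-then-flag pattern, assuming podman as the container runtime and a ceph-osd* container name; the real role derives both from its own variables:

- name: Record whether an OSD container is already running (sketch)
  hosts: testbed-node-3,testbed-node-4,testbed-node-5
  become: true
  tasks:
    - name: Probe for a running ceph-osd container
      ansible.builtin.command: podman ps -q --filter name=ceph-osd
      register: osd_container_check
      changed_when: false
      failed_when: false             # a missing container is not an error here

    - name: Set the handler status fact from the probe result
      ansible.builtin.set_fact:
        handler_osd_status: "{{ osd_container_check.stdout | length > 0 }}"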
Thursday 09 October 2025 10:28:27 +0000 (0:00:00.625) 0:09:12.719 ****** 2025-10-09 10:31:02.856083 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.856088 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.856092 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.856097 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856102 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856107 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856112 | orchestrator | 2025-10-09 10:31:02.856117 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:31:02.856122 | orchestrator | Thursday 09 October 2025 10:28:28 +0000 (0:00:00.954) 0:09:13.674 ****** 2025-10-09 10:31:02.856126 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.856131 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.856136 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.856141 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.856146 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.856150 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.856155 | orchestrator | 2025-10-09 10:31:02.856160 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:31:02.856165 | orchestrator | Thursday 09 October 2025 10:28:29 +0000 (0:00:00.651) 0:09:14.325 ****** 2025-10-09 10:31:02.856170 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:31:02.856175 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:31:02.856179 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:31:02.856184 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.856189 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.856194 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.856199 | orchestrator | 2025-10-09 10:31:02.856204 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:31:02.856209 | orchestrator | Thursday 09 October 2025 10:28:29 +0000 (0:00:00.898) 0:09:15.223 ****** 2025-10-09 10:31:02.856213 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.856227 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.856232 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.856237 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.856241 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.856246 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.856251 | orchestrator | 2025-10-09 10:31:02.856256 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:31:02.856261 | orchestrator | Thursday 09 October 2025 10:28:30 +0000 (0:00:00.719) 0:09:15.942 ****** 2025-10-09 10:31:02.856265 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.856270 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.856275 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.856280 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856285 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856289 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856294 | orchestrator | 2025-10-09 10:31:02.856299 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:31:02.856304 | orchestrator | Thursday 09 October 2025 10:28:31 +0000 (0:00:00.981) 0:09:16.924 
****** 2025-10-09 10:31:02.856309 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.856317 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.856322 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.856327 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856332 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856336 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856341 | orchestrator | 2025-10-09 10:31:02.856346 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-10-09 10:31:02.856351 | orchestrator | Thursday 09 October 2025 10:28:33 +0000 (0:00:01.371) 0:09:18.296 ****** 2025-10-09 10:31:02.856356 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.856360 | orchestrator | 2025-10-09 10:31:02.856365 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-10-09 10:31:02.856370 | orchestrator | Thursday 09 October 2025 10:28:37 +0000 (0:00:04.034) 0:09:22.330 ****** 2025-10-09 10:31:02.856375 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.856380 | orchestrator | 2025-10-09 10:31:02.856384 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-10-09 10:31:02.856389 | orchestrator | Thursday 09 October 2025 10:28:38 +0000 (0:00:01.889) 0:09:24.220 ****** 2025-10-09 10:31:02.856394 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.856399 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.856404 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.856411 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.856416 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.856421 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.856426 | orchestrator | 2025-10-09 10:31:02.856431 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-10-09 10:31:02.856435 | orchestrator | Thursday 09 October 2025 10:28:40 +0000 (0:00:01.711) 0:09:25.931 ****** 2025-10-09 10:31:02.856442 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.856447 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.856452 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.856457 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.856462 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.856467 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.856471 | orchestrator | 2025-10-09 10:31:02.856476 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-10-09 10:31:02.856481 | orchestrator | Thursday 09 October 2025 10:28:41 +0000 (0:00:01.124) 0:09:27.056 ****** 2025-10-09 10:31:02.856486 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.856492 | orchestrator | 2025-10-09 10:31:02.856497 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-10-09 10:31:02.856501 | orchestrator | Thursday 09 October 2025 10:28:43 +0000 (0:00:01.470) 0:09:28.526 ****** 2025-10-09 10:31:02.856506 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.856511 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.856516 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.856521 | orchestrator | changed: 
[testbed-node-3] 2025-10-09 10:31:02.856525 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.856530 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.856535 | orchestrator | 2025-10-09 10:31:02.856540 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-10-09 10:31:02.856545 | orchestrator | Thursday 09 October 2025 10:28:44 +0000 (0:00:01.520) 0:09:30.047 ****** 2025-10-09 10:31:02.856550 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.856554 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.856559 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.856564 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.856569 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.856573 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.856578 | orchestrator | 2025-10-09 10:31:02.856583 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-10-09 10:31:02.856592 | orchestrator | Thursday 09 October 2025 10:28:48 +0000 (0:00:04.142) 0:09:34.190 ****** 2025-10-09 10:31:02.856597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.856602 | orchestrator | 2025-10-09 10:31:02.856607 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-10-09 10:31:02.856612 | orchestrator | Thursday 09 October 2025 10:28:50 +0000 (0:00:01.324) 0:09:35.514 ****** 2025-10-09 10:31:02.856616 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.856621 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.856626 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.856631 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856635 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856640 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856645 | orchestrator | 2025-10-09 10:31:02.856650 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-10-09 10:31:02.856655 | orchestrator | Thursday 09 October 2025 10:28:50 +0000 (0:00:00.668) 0:09:36.183 ****** 2025-10-09 10:31:02.856659 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:31:02.856664 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:31:02.856669 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:31:02.856674 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.856679 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.856684 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.856689 | orchestrator | 2025-10-09 10:31:02.856693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-10-09 10:31:02.856698 | orchestrator | Thursday 09 October 2025 10:28:53 +0000 (0:00:02.443) 0:09:38.627 ****** 2025-10-09 10:31:02.856703 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:31:02.856708 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:31:02.856712 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:31:02.856717 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856722 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856727 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856731 | orchestrator | 2025-10-09 10:31:02.856736 | orchestrator | PLAY [Apply role ceph-mds] 
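The ceph-crash play creates a single client.crash keyring on the first monitor, distributes it to every host, prepares /var/lib/ceph/crash/posted and then starts a ceph-crash container per node. A hedged sketch of the keyring and directory steps, using the crash profile caps documented for ceph-crash; the ownership below is an assumption, not taken from the log:

- name: Provision the client.crash keyring and crash directory (sketch)
  hosts: testbed-node-0
  become: true
  tasks:
    - name: Create (or fetch) the crash client key with the crash profiles
      ansible.builtin.command: >
        ceph auth get-or-create client.crash
        mon 'profile crash' mgr 'profile crash'
        -o /etc/ceph/ceph.client.crash.keyring
      args:
        creates: /etc/ceph/ceph.client.crash.keyring   # keeps the task idempotent

    - name: Directory where ceph-crash moves reports it has already posted
      ansible.builtin.file:
        path: /var/lib/ceph/crash/posted
        state: directory
        owner: "167"                 # ceph uid used inside the container images (assumed)
        group: "167"
        mode: "0750"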
***************************************************** 2025-10-09 10:31:02.856741 | orchestrator | 2025-10-09 10:31:02.856746 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:31:02.856751 | orchestrator | Thursday 09 October 2025 10:28:54 +0000 (0:00:01.166) 0:09:39.793 ****** 2025-10-09 10:31:02.856756 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.856760 | orchestrator | 2025-10-09 10:31:02.856765 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:31:02.856770 | orchestrator | Thursday 09 October 2025 10:28:55 +0000 (0:00:00.517) 0:09:40.311 ****** 2025-10-09 10:31:02.856775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.856780 | orchestrator | 2025-10-09 10:31:02.856785 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:31:02.856789 | orchestrator | Thursday 09 October 2025 10:28:55 +0000 (0:00:00.763) 0:09:41.075 ****** 2025-10-09 10:31:02.856794 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.856799 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.856804 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.856809 | orchestrator | 2025-10-09 10:31:02.856814 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:31:02.856821 | orchestrator | Thursday 09 October 2025 10:28:56 +0000 (0:00:00.330) 0:09:41.406 ****** 2025-10-09 10:31:02.856826 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856831 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856836 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856840 | orchestrator | 2025-10-09 10:31:02.856845 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:31:02.856857 | orchestrator | Thursday 09 October 2025 10:28:56 +0000 (0:00:00.765) 0:09:42.171 ****** 2025-10-09 10:31:02.856863 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856867 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856872 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856877 | orchestrator | 2025-10-09 10:31:02.856882 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:31:02.856887 | orchestrator | Thursday 09 October 2025 10:28:57 +0000 (0:00:00.754) 0:09:42.925 ****** 2025-10-09 10:31:02.856891 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.856896 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.856901 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.856906 | orchestrator | 2025-10-09 10:31:02.856911 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:31:02.856915 | orchestrator | Thursday 09 October 2025 10:28:58 +0000 (0:00:01.092) 0:09:44.018 ****** 2025-10-09 10:31:02.856920 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.856925 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.856930 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.856935 | orchestrator | 2025-10-09 10:31:02.856940 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-10-09 10:31:02.856944 | orchestrator | Thursday 09 October 2025 10:28:59 +0000 (0:00:00.318) 0:09:44.337 ****** 2025-10-09 10:31:02.856949 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.856954 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.856959 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.856964 | orchestrator | 2025-10-09 10:31:02.856968 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:31:02.856973 | orchestrator | Thursday 09 October 2025 10:28:59 +0000 (0:00:00.307) 0:09:44.644 ****** 2025-10-09 10:31:02.856978 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.856983 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.856988 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.856992 | orchestrator | 2025-10-09 10:31:02.856997 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:31:02.857002 | orchestrator | Thursday 09 October 2025 10:28:59 +0000 (0:00:00.323) 0:09:44.968 ****** 2025-10-09 10:31:02.857007 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857012 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857016 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857021 | orchestrator | 2025-10-09 10:31:02.857026 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:31:02.857031 | orchestrator | Thursday 09 October 2025 10:29:00 +0000 (0:00:01.032) 0:09:46.001 ****** 2025-10-09 10:31:02.857036 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857041 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857045 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857050 | orchestrator | 2025-10-09 10:31:02.857055 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:31:02.857060 | orchestrator | Thursday 09 October 2025 10:29:01 +0000 (0:00:00.772) 0:09:46.773 ****** 2025-10-09 10:31:02.857064 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.857069 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.857074 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.857079 | orchestrator | 2025-10-09 10:31:02.857084 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:31:02.857089 | orchestrator | Thursday 09 October 2025 10:29:01 +0000 (0:00:00.354) 0:09:47.127 ****** 2025-10-09 10:31:02.857093 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.857098 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.857103 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.857108 | orchestrator | 2025-10-09 10:31:02.857113 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:31:02.857118 | orchestrator | Thursday 09 October 2025 10:29:02 +0000 (0:00:00.313) 0:09:47.441 ****** 2025-10-09 10:31:02.857125 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857130 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857135 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857140 | orchestrator | 2025-10-09 10:31:02.857145 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:31:02.857149 | orchestrator | Thursday 09 October 2025 10:29:02 +0000 
(0:00:00.626) 0:09:48.068 ****** 2025-10-09 10:31:02.857154 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857159 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857164 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857168 | orchestrator | 2025-10-09 10:31:02.857173 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:31:02.857178 | orchestrator | Thursday 09 October 2025 10:29:03 +0000 (0:00:00.351) 0:09:48.419 ****** 2025-10-09 10:31:02.857183 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857188 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857192 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857197 | orchestrator | 2025-10-09 10:31:02.857202 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:31:02.857207 | orchestrator | Thursday 09 October 2025 10:29:03 +0000 (0:00:00.344) 0:09:48.765 ****** 2025-10-09 10:31:02.857212 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.857269 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.857275 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.857280 | orchestrator | 2025-10-09 10:31:02.857285 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:31:02.857290 | orchestrator | Thursday 09 October 2025 10:29:03 +0000 (0:00:00.309) 0:09:49.074 ****** 2025-10-09 10:31:02.857295 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.857300 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.857304 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.857309 | orchestrator | 2025-10-09 10:31:02.857314 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:31:02.857319 | orchestrator | Thursday 09 October 2025 10:29:04 +0000 (0:00:00.600) 0:09:49.675 ****** 2025-10-09 10:31:02.857324 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.857332 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.857336 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.857341 | orchestrator | 2025-10-09 10:31:02.857346 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:31:02.857351 | orchestrator | Thursday 09 October 2025 10:29:04 +0000 (0:00:00.309) 0:09:49.984 ****** 2025-10-09 10:31:02.857356 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857364 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857369 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857373 | orchestrator | 2025-10-09 10:31:02.857378 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:31:02.857383 | orchestrator | Thursday 09 October 2025 10:29:05 +0000 (0:00:00.403) 0:09:50.387 ****** 2025-10-09 10:31:02.857388 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857393 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857398 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857402 | orchestrator | 2025-10-09 10:31:02.857407 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-10-09 10:31:02.857412 | orchestrator | Thursday 09 October 2025 10:29:05 +0000 (0:00:00.866) 0:09:51.254 ****** 2025-10-09 10:31:02.857417 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.857422 | 
orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.857427 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-10-09 10:31:02.857431 | orchestrator | 2025-10-09 10:31:02.857436 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-10-09 10:31:02.857441 | orchestrator | Thursday 09 October 2025 10:29:06 +0000 (0:00:00.483) 0:09:51.738 ****** 2025-10-09 10:31:02.857450 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:31:02.857454 | orchestrator | 2025-10-09 10:31:02.857459 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-10-09 10:31:02.857464 | orchestrator | Thursday 09 October 2025 10:29:08 +0000 (0:00:02.366) 0:09:54.104 ****** 2025-10-09 10:31:02.857470 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-10-09 10:31:02.857476 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.857480 | orchestrator | 2025-10-09 10:31:02.857485 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-10-09 10:31:02.857490 | orchestrator | Thursday 09 October 2025 10:29:09 +0000 (0:00:00.219) 0:09:54.324 ****** 2025-10-09 10:31:02.857495 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:31:02.857504 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:31:02.857509 | orchestrator | 2025-10-09 10:31:02.857514 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-10-09 10:31:02.857518 | orchestrator | Thursday 09 October 2025 10:29:17 +0000 (0:00:08.908) 0:10:03.232 ****** 2025-10-09 10:31:02.857523 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:31:02.857528 | orchestrator | 2025-10-09 10:31:02.857533 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-10-09 10:31:02.857538 | orchestrator | Thursday 09 October 2025 10:29:21 +0000 (0:00:03.789) 0:10:07.022 ****** 2025-10-09 10:31:02.857543 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.857547 | orchestrator | 2025-10-09 10:31:02.857552 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-10-09 10:31:02.857557 | orchestrator | Thursday 09 October 2025 10:29:22 +0000 (0:00:00.949) 0:10:07.972 ****** 2025-10-09 10:31:02.857562 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-09 10:31:02.857567 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-09 10:31:02.857572 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-09 10:31:02.857576 | 
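Before any MDS starts, the play creates the two CephFS pools listed above (cephfs_data and cephfs_metadata, 16 PGs, replicated size 3) and then the filesystem itself, all delegated to the first monitor. A minimal CLI-level sketch of those steps; the filesystem name "cephfs" is an assumption, and the real role goes through its own pool-creation helper:

- name: Create CephFS pools and filesystem (sketch)
  hosts: testbed-node-0
  become: true
  tasks:
    - name: Create the data and metadata pools with 16 PGs each
      ansible.builtin.command: ceph osd pool create {{ item }} 16
      loop: [cephfs_data, cephfs_metadata]
      changed_when: true

    - name: Create the filesystem (metadata pool first, then data pool)
      ansible.builtin.command: ceph fs new cephfs cephfs_metadata cephfs_data
      changed_when: true             # fs new also tags both pools with the cephfs application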
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-10-09 10:31:02.857581 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-10-09 10:31:02.857586 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-10-09 10:31:02.857591 | orchestrator | 2025-10-09 10:31:02.857596 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-10-09 10:31:02.857600 | orchestrator | Thursday 09 October 2025 10:29:23 +0000 (0:00:01.137) 0:10:09.109 ****** 2025-10-09 10:31:02.857605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.857610 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:31:02.857615 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:31:02.857619 | orchestrator | 2025-10-09 10:31:02.857624 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-10-09 10:31:02.857629 | orchestrator | Thursday 09 October 2025 10:29:25 +0000 (0:00:02.141) 0:10:11.250 ****** 2025-10-09 10:31:02.857634 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:31:02.857639 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:31:02.857650 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.857655 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:31:02.857660 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-10-09 10:31:02.857665 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.857670 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:31:02.857674 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-10-09 10:31:02.857681 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.857687 | orchestrator | 2025-10-09 10:31:02.857691 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-10-09 10:31:02.857696 | orchestrator | Thursday 09 October 2025 10:29:27 +0000 (0:00:01.258) 0:10:12.509 ****** 2025-10-09 10:31:02.857701 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.857706 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.857711 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.857716 | orchestrator | 2025-10-09 10:31:02.857720 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-10-09 10:31:02.857725 | orchestrator | Thursday 09 October 2025 10:29:30 +0000 (0:00:03.107) 0:10:15.616 ****** 2025-10-09 10:31:02.857730 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.857735 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.857740 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.857744 | orchestrator | 2025-10-09 10:31:02.857749 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-10-09 10:31:02.857754 | orchestrator | Thursday 09 October 2025 10:29:30 +0000 (0:00:00.340) 0:10:15.957 ****** 2025-10-09 10:31:02.857759 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.857764 | orchestrator | 2025-10-09 10:31:02.857769 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-10-09 
10:31:02.857773 | orchestrator | Thursday 09 October 2025 10:29:31 +0000 (0:00:00.567) 0:10:16.525 ****** 2025-10-09 10:31:02.857778 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.857783 | orchestrator | 2025-10-09 10:31:02.857788 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-10-09 10:31:02.857793 | orchestrator | Thursday 09 October 2025 10:29:32 +0000 (0:00:00.838) 0:10:17.364 ****** 2025-10-09 10:31:02.857797 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.857802 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.857807 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.857811 | orchestrator | 2025-10-09 10:31:02.857816 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-10-09 10:31:02.857820 | orchestrator | Thursday 09 October 2025 10:29:33 +0000 (0:00:01.336) 0:10:18.700 ****** 2025-10-09 10:31:02.857825 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.857830 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.857834 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.857839 | orchestrator | 2025-10-09 10:31:02.857843 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-10-09 10:31:02.857848 | orchestrator | Thursday 09 October 2025 10:29:34 +0000 (0:00:01.186) 0:10:19.887 ****** 2025-10-09 10:31:02.857852 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.857857 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.857861 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.857866 | orchestrator | 2025-10-09 10:31:02.857870 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-10-09 10:31:02.857875 | orchestrator | Thursday 09 October 2025 10:29:36 +0000 (0:00:02.111) 0:10:21.998 ****** 2025-10-09 10:31:02.857879 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.857884 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.857889 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.857893 | orchestrator | 2025-10-09 10:31:02.857902 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-10-09 10:31:02.857907 | orchestrator | Thursday 09 October 2025 10:29:38 +0000 (0:00:02.218) 0:10:24.217 ****** 2025-10-09 10:31:02.857911 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857916 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857920 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857925 | orchestrator | 2025-10-09 10:31:02.857929 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-09 10:31:02.857934 | orchestrator | Thursday 09 October 2025 10:29:40 +0000 (0:00:01.766) 0:10:25.984 ****** 2025-10-09 10:31:02.857939 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.857943 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.857948 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.857952 | orchestrator | 2025-10-09 10:31:02.857957 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-10-09 10:31:02.857962 | orchestrator | Thursday 09 October 2025 10:29:41 +0000 (0:00:01.241) 0:10:27.226 ****** 2025-10-09 10:31:02.857966 | orchestrator | included: 
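The keyring and systemd tasks above amount to registering an MDS identity with the monitors and wrapping the containerized daemon in a unit grouped under ceph-mds.target, then waiting for its admin socket. A rough shell equivalent per node; the capability string, unit name and socket path are assumptions for illustration and not copied from the role:

    # Assumed shape of the mds bring-up on testbed-node-3..5:
    ceph auth get-or-create mds.$(hostname -s) \
        mon 'allow profile mds' mds 'allow *' osd 'allow rwx' \
        -o /var/lib/ceph/mds/ceph-$(hostname -s)/keyring
    systemctl daemon-reload
    systemctl enable ceph-mds.target
    systemctl enable --now ceph-mds@$(hostname -s)   # unit name assumed
    # "Wait for mds socket to exist" ~ poll the daemon's admin socket:
    until test -S /var/run/ceph/ceph-mds.$(hostname -s).asok; do sleep 1; done
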
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.857971 | orchestrator | 2025-10-09 10:31:02.857975 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-10-09 10:31:02.857980 | orchestrator | Thursday 09 October 2025 10:29:42 +0000 (0:00:00.740) 0:10:27.966 ****** 2025-10-09 10:31:02.857984 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.857989 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.857994 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.857998 | orchestrator | 2025-10-09 10:31:02.858003 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-10-09 10:31:02.858007 | orchestrator | Thursday 09 October 2025 10:29:43 +0000 (0:00:00.703) 0:10:28.670 ****** 2025-10-09 10:31:02.858024 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.858030 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.858034 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.858039 | orchestrator | 2025-10-09 10:31:02.858044 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-10-09 10:31:02.858048 | orchestrator | Thursday 09 October 2025 10:29:44 +0000 (0:00:01.399) 0:10:30.069 ****** 2025-10-09 10:31:02.858053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.858060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.858065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.858070 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858074 | orchestrator | 2025-10-09 10:31:02.858079 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-10-09 10:31:02.858086 | orchestrator | Thursday 09 October 2025 10:29:45 +0000 (0:00:01.061) 0:10:31.131 ****** 2025-10-09 10:31:02.858091 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858095 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858100 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858105 | orchestrator | 2025-10-09 10:31:02.858109 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-10-09 10:31:02.858114 | orchestrator | 2025-10-09 10:31:02.858118 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:31:02.858123 | orchestrator | Thursday 09 October 2025 10:29:46 +0000 (0:00:00.632) 0:10:31.764 ****** 2025-10-09 10:31:02.858128 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.858132 | orchestrator | 2025-10-09 10:31:02.858137 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:31:02.858141 | orchestrator | Thursday 09 October 2025 10:29:47 +0000 (0:00:00.928) 0:10:32.692 ****** 2025-10-09 10:31:02.858146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.858154 | orchestrator | 2025-10-09 10:31:02.858158 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:31:02.858163 | orchestrator | Thursday 09 October 2025 10:29:47 +0000 (0:00:00.504) 
0:10:33.196 ****** 2025-10-09 10:31:02.858168 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858172 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858177 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858181 | orchestrator | 2025-10-09 10:31:02.858186 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:31:02.858190 | orchestrator | Thursday 09 October 2025 10:29:48 +0000 (0:00:00.460) 0:10:33.657 ****** 2025-10-09 10:31:02.858195 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858200 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858204 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858209 | orchestrator | 2025-10-09 10:31:02.858213 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:31:02.858228 | orchestrator | Thursday 09 October 2025 10:29:49 +0000 (0:00:00.765) 0:10:34.423 ****** 2025-10-09 10:31:02.858233 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858237 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858242 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858246 | orchestrator | 2025-10-09 10:31:02.858251 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:31:02.858256 | orchestrator | Thursday 09 October 2025 10:29:49 +0000 (0:00:00.728) 0:10:35.151 ****** 2025-10-09 10:31:02.858260 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858265 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858269 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858274 | orchestrator | 2025-10-09 10:31:02.858278 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:31:02.858283 | orchestrator | Thursday 09 October 2025 10:29:50 +0000 (0:00:00.785) 0:10:35.936 ****** 2025-10-09 10:31:02.858287 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858292 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858297 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858301 | orchestrator | 2025-10-09 10:31:02.858306 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:31:02.858310 | orchestrator | Thursday 09 October 2025 10:29:51 +0000 (0:00:00.484) 0:10:36.421 ****** 2025-10-09 10:31:02.858315 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858319 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858324 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858329 | orchestrator | 2025-10-09 10:31:02.858333 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:31:02.858338 | orchestrator | Thursday 09 October 2025 10:29:51 +0000 (0:00:00.322) 0:10:36.743 ****** 2025-10-09 10:31:02.858342 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858347 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858353 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858361 | orchestrator | 2025-10-09 10:31:02.858367 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:31:02.858372 | orchestrator | Thursday 09 October 2025 10:29:51 +0000 (0:00:00.338) 0:10:37.082 ****** 2025-10-09 10:31:02.858376 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858381 | 
orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858385 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858390 | orchestrator | 2025-10-09 10:31:02.858395 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:31:02.858399 | orchestrator | Thursday 09 October 2025 10:29:52 +0000 (0:00:00.780) 0:10:37.863 ****** 2025-10-09 10:31:02.858404 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858408 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858413 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858418 | orchestrator | 2025-10-09 10:31:02.858422 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:31:02.858431 | orchestrator | Thursday 09 October 2025 10:29:53 +0000 (0:00:01.101) 0:10:38.964 ****** 2025-10-09 10:31:02.858435 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858440 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858444 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858449 | orchestrator | 2025-10-09 10:31:02.858453 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:31:02.858458 | orchestrator | Thursday 09 October 2025 10:29:53 +0000 (0:00:00.315) 0:10:39.280 ****** 2025-10-09 10:31:02.858462 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858467 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858472 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858476 | orchestrator | 2025-10-09 10:31:02.858483 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:31:02.858488 | orchestrator | Thursday 09 October 2025 10:29:54 +0000 (0:00:00.340) 0:10:39.620 ****** 2025-10-09 10:31:02.858493 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858497 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858502 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858506 | orchestrator | 2025-10-09 10:31:02.858514 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:31:02.858518 | orchestrator | Thursday 09 October 2025 10:29:54 +0000 (0:00:00.363) 0:10:39.984 ****** 2025-10-09 10:31:02.858523 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858527 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858532 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858537 | orchestrator | 2025-10-09 10:31:02.858541 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:31:02.858546 | orchestrator | Thursday 09 October 2025 10:29:55 +0000 (0:00:00.659) 0:10:40.643 ****** 2025-10-09 10:31:02.858550 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858555 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858559 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858564 | orchestrator | 2025-10-09 10:31:02.858568 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:31:02.858573 | orchestrator | Thursday 09 October 2025 10:29:55 +0000 (0:00:00.385) 0:10:41.029 ****** 2025-10-09 10:31:02.858577 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858582 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858587 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858591 | 
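The "Check for a ... container" and "Set_fact handler_*_status" tasks above only record whether each daemon's container is already running, so the restart handlers later in the play can be skipped on a fresh deployment. A hypothetical probe of that kind; the container names and the docker invocation are illustrative rather than taken from ceph-handler:

    # Illustrative per-daemon status probe; a non-empty result maps to
    # handler_<daemon>_status=true (variable names as seen in the log above).
    for daemon in osd mds rgw crash exporter; do
        if [ -n "$(docker ps -q --filter name=ceph-${daemon})" ]; then
            echo "handler_${daemon}_status=true"
        else
            echo "handler_${daemon}_status=false"
        fi
    done
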
orchestrator | 2025-10-09 10:31:02.858596 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:31:02.858600 | orchestrator | Thursday 09 October 2025 10:29:56 +0000 (0:00:00.356) 0:10:41.385 ****** 2025-10-09 10:31:02.858605 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858609 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858614 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858619 | orchestrator | 2025-10-09 10:31:02.858623 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:31:02.858628 | orchestrator | Thursday 09 October 2025 10:29:56 +0000 (0:00:00.333) 0:10:41.718 ****** 2025-10-09 10:31:02.858632 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858637 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858641 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858646 | orchestrator | 2025-10-09 10:31:02.858650 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:31:02.858655 | orchestrator | Thursday 09 October 2025 10:29:57 +0000 (0:00:00.597) 0:10:42.316 ****** 2025-10-09 10:31:02.858660 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858664 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858669 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858673 | orchestrator | 2025-10-09 10:31:02.858678 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:31:02.858682 | orchestrator | Thursday 09 October 2025 10:29:57 +0000 (0:00:00.366) 0:10:42.683 ****** 2025-10-09 10:31:02.858690 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.858695 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.858699 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.858704 | orchestrator | 2025-10-09 10:31:02.858708 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-10-09 10:31:02.858713 | orchestrator | Thursday 09 October 2025 10:29:57 +0000 (0:00:00.556) 0:10:43.239 ****** 2025-10-09 10:31:02.858717 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.858722 | orchestrator | 2025-10-09 10:31:02.858727 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-10-09 10:31:02.858731 | orchestrator | Thursday 09 October 2025 10:29:58 +0000 (0:00:00.905) 0:10:44.145 ****** 2025-10-09 10:31:02.858736 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.858740 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:31:02.858745 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:31:02.858749 | orchestrator | 2025-10-09 10:31:02.858754 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-10-09 10:31:02.858758 | orchestrator | Thursday 09 October 2025 10:30:01 +0000 (0:00:02.323) 0:10:46.468 ****** 2025-10-09 10:31:02.858763 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:31:02.858767 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:31:02.858772 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.858777 | orchestrator | changed: [testbed-node-4] => 
(item=None) 2025-10-09 10:31:02.858781 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-10-09 10:31:02.858786 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.858790 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:31:02.858795 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-10-09 10:31:02.858800 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.858804 | orchestrator | 2025-10-09 10:31:02.858809 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-10-09 10:31:02.858813 | orchestrator | Thursday 09 October 2025 10:30:02 +0000 (0:00:01.228) 0:10:47.697 ****** 2025-10-09 10:31:02.858818 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.858822 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.858827 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.858831 | orchestrator | 2025-10-09 10:31:02.858836 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-10-09 10:31:02.858840 | orchestrator | Thursday 09 October 2025 10:30:02 +0000 (0:00:00.377) 0:10:48.074 ****** 2025-10-09 10:31:02.858845 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.858850 | orchestrator | 2025-10-09 10:31:02.858854 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-10-09 10:31:02.858859 | orchestrator | Thursday 09 October 2025 10:30:03 +0000 (0:00:00.882) 0:10:48.957 ****** 2025-10-09 10:31:02.858866 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.858873 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.858878 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.858883 | orchestrator | 2025-10-09 10:31:02.858887 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-10-09 10:31:02.858892 | orchestrator | Thursday 09 October 2025 10:30:04 +0000 (0:00:00.823) 0:10:49.781 ****** 2025-10-09 10:31:02.858897 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.858905 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-09 10:31:02.858909 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.858914 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-09 10:31:02.858918 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.858923 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-09 10:31:02.858928 | orchestrator | 2025-10-09 10:31:02.858932 | orchestrator | TASK [ceph-rgw : Get keys from monitors] 
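The "Create rados gateway directories" and "Create rgw keyrings" tasks above are delegated to the first monitor (testbed-node-0) and, per node and rgw instance, come down to creating the instance directory and a client keyring. A sketch under assumptions: the directory layout and capability strings are illustrative, not copied from the role:

    # Assumed per-instance bootstrap for rgw0 on testbed-node-3..5:
    host=$(hostname -s)
    mkdir -p /var/lib/ceph/radosgw/ceph-rgw.${host}.rgw0
    ceph auth get-or-create client.rgw.${host}.rgw0 \
        mon 'allow rw' osd 'allow rwx' \
        -o /var/lib/ceph/radosgw/ceph-rgw.${host}.rgw0/keyring
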
*************************************** 2025-10-09 10:31:02.858937 | orchestrator | Thursday 09 October 2025 10:30:09 +0000 (0:00:04.836) 0:10:54.618 ****** 2025-10-09 10:31:02.858941 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.858946 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:31:02.858950 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.858955 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:31:02.858959 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:31:02.858964 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:31:02.858968 | orchestrator | 2025-10-09 10:31:02.858973 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-10-09 10:31:02.858978 | orchestrator | Thursday 09 October 2025 10:30:12 +0000 (0:00:02.821) 0:10:57.440 ****** 2025-10-09 10:31:02.858982 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:31:02.858987 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.858991 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:31:02.858996 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.859001 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:31:02.859005 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.859010 | orchestrator | 2025-10-09 10:31:02.859014 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-10-09 10:31:02.859019 | orchestrator | Thursday 09 October 2025 10:30:13 +0000 (0:00:01.272) 0:10:58.712 ****** 2025-10-09 10:31:02.859023 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-10-09 10:31:02.859028 | orchestrator | 2025-10-09 10:31:02.859033 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-10-09 10:31:02.859037 | orchestrator | Thursday 09 October 2025 10:30:13 +0000 (0:00:00.266) 0:10:58.979 ****** 2025-10-09 10:31:02.859042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859065 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.859069 | orchestrator | 2025-10-09 10:31:02.859074 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-10-09 10:31:02.859079 | orchestrator | Thursday 09 October 2025 10:30:14 +0000 (0:00:00.879) 0:10:59.859 ****** 2025-10-09 10:31:02.859086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-10-09 10:31:02.859114 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.859118 | orchestrator | 2025-10-09 10:31:02.859125 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-10-09 10:31:02.859130 | orchestrator | Thursday 09 October 2025 10:30:15 +0000 (0:00:00.918) 0:11:00.777 ****** 2025-10-09 10:31:02.859135 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-09 10:31:02.859139 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-09 10:31:02.859144 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-09 10:31:02.859149 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-09 10:31:02.859153 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-10-09 10:31:02.859158 | orchestrator | 2025-10-09 10:31:02.859162 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-10-09 10:31:02.859167 | orchestrator | Thursday 09 October 2025 10:30:46 +0000 (0:00:31.486) 0:11:32.263 ****** 2025-10-09 10:31:02.859172 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.859176 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.859181 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.859185 | orchestrator | 2025-10-09 10:31:02.859190 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-10-09 10:31:02.859195 | orchestrator | Thursday 09 October 2025 10:30:47 +0000 (0:00:00.593) 0:11:32.857 ****** 2025-10-09 10:31:02.859199 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.859204 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.859208 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.859213 | orchestrator | 2025-10-09 10:31:02.859228 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-10-09 10:31:02.859233 | orchestrator | Thursday 09 October 2025 10:30:47 +0000 (0:00:00.351) 0:11:33.209 ****** 2025-10-09 10:31:02.859237 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, 
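"Create rgw pools" above creates the five default.rgw.* pools as replicated pools with pg_num 8 (the "Create ec profile" and "Set crush rule" tasks are skipped because no erasure coding or custom rule is configured); per the recap further down this is one of the slower steps of the play. A sketch of the equivalent CLI, with flags assumed:

    # Sketch: create the RGW pools listed above and tag them for the rgw application.
    for pool in default.rgw.buckets.data default.rgw.buckets.index \
                default.rgw.control default.rgw.log default.rgw.meta; do
        ceph osd pool create "${pool}" 8 8 replicated replicated_rule
        ceph osd pool application enable "${pool}" rgw
    done
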
testbed-node-4, testbed-node-5 2025-10-09 10:31:02.859242 | orchestrator | 2025-10-09 10:31:02.859247 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-10-09 10:31:02.859251 | orchestrator | Thursday 09 October 2025 10:30:48 +0000 (0:00:00.541) 0:11:33.751 ****** 2025-10-09 10:31:02.859256 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.859260 | orchestrator | 2025-10-09 10:31:02.859265 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-10-09 10:31:02.859269 | orchestrator | Thursday 09 October 2025 10:30:49 +0000 (0:00:00.813) 0:11:34.564 ****** 2025-10-09 10:31:02.859277 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.859282 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.859286 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.859291 | orchestrator | 2025-10-09 10:31:02.859295 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-10-09 10:31:02.859300 | orchestrator | Thursday 09 October 2025 10:30:50 +0000 (0:00:01.484) 0:11:36.049 ****** 2025-10-09 10:31:02.859305 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.859309 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.859314 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.859318 | orchestrator | 2025-10-09 10:31:02.859323 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-10-09 10:31:02.859327 | orchestrator | Thursday 09 October 2025 10:30:52 +0000 (0:00:01.279) 0:11:37.328 ****** 2025-10-09 10:31:02.859332 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:31:02.859336 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:31:02.859341 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:31:02.859345 | orchestrator | 2025-10-09 10:31:02.859350 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-10-09 10:31:02.859355 | orchestrator | Thursday 09 October 2025 10:30:54 +0000 (0:00:02.208) 0:11:39.537 ****** 2025-10-09 10:31:02.859359 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.859364 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.859368 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-09 10:31:02.859373 | orchestrator | 2025-10-09 10:31:02.859378 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-09 10:31:02.859382 | orchestrator | Thursday 09 October 2025 10:30:56 +0000 (0:00:02.569) 0:11:42.107 ****** 2025-10-09 10:31:02.859387 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.859391 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.859396 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.859401 | orchestrator | 2025-10-09 10:31:02.859405 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-10-09 10:31:02.859410 | orchestrator | Thursday 09 October 2025 10:30:57 +0000 (0:00:00.680) 0:11:42.787 ****** 2025-10-09 
10:31:02.859415 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:31:02.859419 | orchestrator | 2025-10-09 10:31:02.859426 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-10-09 10:31:02.859431 | orchestrator | Thursday 09 October 2025 10:30:58 +0000 (0:00:00.575) 0:11:43.362 ****** 2025-10-09 10:31:02.859435 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.859440 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.859444 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.859449 | orchestrator | 2025-10-09 10:31:02.859454 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-10-09 10:31:02.859458 | orchestrator | Thursday 09 October 2025 10:30:58 +0000 (0:00:00.351) 0:11:43.714 ****** 2025-10-09 10:31:02.859463 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.859467 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:31:02.859472 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:31:02.859477 | orchestrator | 2025-10-09 10:31:02.859481 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-10-09 10:31:02.859486 | orchestrator | Thursday 09 October 2025 10:30:59 +0000 (0:00:00.630) 0:11:44.344 ****** 2025-10-09 10:31:02.859490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:31:02.859495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:31:02.859499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:31:02.859507 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:31:02.859512 | orchestrator | 2025-10-09 10:31:02.859516 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-10-09 10:31:02.859521 | orchestrator | Thursday 09 October 2025 10:30:59 +0000 (0:00:00.693) 0:11:45.038 ****** 2025-10-09 10:31:02.859526 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:31:02.859530 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:31:02.859535 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:31:02.859539 | orchestrator | 2025-10-09 10:31:02.859544 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:31:02.859548 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-10-09 10:31:02.859553 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-10-09 10:31:02.859558 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-10-09 10:31:02.859562 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-10-09 10:31:02.859567 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-10-09 10:31:02.859572 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-10-09 10:31:02.859576 | orchestrator | 2025-10-09 10:31:02.859581 | orchestrator | 2025-10-09 10:31:02.859586 | orchestrator | 2025-10-09 10:31:02.859590 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:31:02.859595 | orchestrator | Thursday 
09 October 2025 10:31:00 +0000 (0:00:00.258) 0:11:45.296 ****** 2025-10-09 10:31:02.859599 | orchestrator | =============================================================================== 2025-10-09 10:31:02.859604 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 51.65s 2025-10-09 10:31:02.859608 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.86s 2025-10-09 10:31:02.859613 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.49s 2025-10-09 10:31:02.859618 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.28s 2025-10-09 10:31:02.859622 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.86s 2025-10-09 10:31:02.859627 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.92s 2025-10-09 10:31:02.859631 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.54s 2025-10-09 10:31:02.859649 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.14s 2025-10-09 10:31:02.859654 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.91s 2025-10-09 10:31:02.859659 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.46s 2025-10-09 10:31:02.859663 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 6.98s 2025-10-09 10:31:02.859668 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.48s 2025-10-09 10:31:02.859672 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.12s 2025-10-09 10:31:02.859677 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.84s 2025-10-09 10:31:02.859682 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 4.46s 2025-10-09 10:31:02.859686 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.14s 2025-10-09 10:31:02.859693 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.12s 2025-10-09 10:31:02.859701 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.03s 2025-10-09 10:31:02.859706 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.98s 2025-10-09 10:31:02.859710 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.91s 2025-10-09 10:31:02.859718 | orchestrator | 2025-10-09 10:31:02 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:02.859722 | orchestrator | 2025-10-09 10:31:02 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:02.859727 | orchestrator | 2025-10-09 10:31:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:05.887432 | orchestrator | 2025-10-09 10:31:05 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:05.888249 | orchestrator | 2025-10-09 10:31:05 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:05.889532 | orchestrator | 2025-10-09 10:31:05 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:05.889900 | orchestrator | 2025-10-09 10:31:05 | INFO  | 
Wait 1 second(s) until the next check 2025-10-09 10:31:08.951019 | orchestrator | 2025-10-09 10:31:08 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:08.954134 | orchestrator | 2025-10-09 10:31:08 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:08.955907 | orchestrator | 2025-10-09 10:31:08 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:08.956517 | orchestrator | 2025-10-09 10:31:08 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:12.004435 | orchestrator | 2025-10-09 10:31:12 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:12.005424 | orchestrator | 2025-10-09 10:31:12 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:12.007642 | orchestrator | 2025-10-09 10:31:12 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:12.007670 | orchestrator | 2025-10-09 10:31:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:15.065127 | orchestrator | 2025-10-09 10:31:15 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:15.065491 | orchestrator | 2025-10-09 10:31:15 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:15.068026 | orchestrator | 2025-10-09 10:31:15 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:15.068054 | orchestrator | 2025-10-09 10:31:15 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:18.130744 | orchestrator | 2025-10-09 10:31:18 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:18.130844 | orchestrator | 2025-10-09 10:31:18 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:18.130858 | orchestrator | 2025-10-09 10:31:18 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:18.130870 | orchestrator | 2025-10-09 10:31:18 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:21.189948 | orchestrator | 2025-10-09 10:31:21 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:21.190196 | orchestrator | 2025-10-09 10:31:21 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:21.193851 | orchestrator | 2025-10-09 10:31:21 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:21.193873 | orchestrator | 2025-10-09 10:31:21 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:24.224672 | orchestrator | 2025-10-09 10:31:24 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:24.226311 | orchestrator | 2025-10-09 10:31:24 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:24.226347 | orchestrator | 2025-10-09 10:31:24 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:24.226461 | orchestrator | 2025-10-09 10:31:24 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:27.274353 | orchestrator | 2025-10-09 10:31:27 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:27.275484 | orchestrator | 2025-10-09 10:31:27 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:27.277978 | orchestrator | 2025-10-09 10:31:27 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in 
state STARTED 2025-10-09 10:31:27.278364 | orchestrator | 2025-10-09 10:31:27 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:30.316891 | orchestrator | 2025-10-09 10:31:30 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:30.317925 | orchestrator | 2025-10-09 10:31:30 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:30.319459 | orchestrator | 2025-10-09 10:31:30 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:30.319486 | orchestrator | 2025-10-09 10:31:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:33.368726 | orchestrator | 2025-10-09 10:31:33 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:33.369902 | orchestrator | 2025-10-09 10:31:33 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:33.372773 | orchestrator | 2025-10-09 10:31:33 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:33.372803 | orchestrator | 2025-10-09 10:31:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:36.428798 | orchestrator | 2025-10-09 10:31:36 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:36.431862 | orchestrator | 2025-10-09 10:31:36 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:36.433850 | orchestrator | 2025-10-09 10:31:36 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:36.433950 | orchestrator | 2025-10-09 10:31:36 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:39.483318 | orchestrator | 2025-10-09 10:31:39 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:39.484042 | orchestrator | 2025-10-09 10:31:39 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:39.486701 | orchestrator | 2025-10-09 10:31:39 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:39.486986 | orchestrator | 2025-10-09 10:31:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:42.534877 | orchestrator | 2025-10-09 10:31:42 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:42.537777 | orchestrator | 2025-10-09 10:31:42 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:42.539495 | orchestrator | 2025-10-09 10:31:42 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:42.539523 | orchestrator | 2025-10-09 10:31:42 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:45.584647 | orchestrator | 2025-10-09 10:31:45 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:45.585184 | orchestrator | 2025-10-09 10:31:45 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:45.587270 | orchestrator | 2025-10-09 10:31:45 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:45.587296 | orchestrator | 2025-10-09 10:31:45 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:48.635951 | orchestrator | 2025-10-09 10:31:48 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:48.637961 | orchestrator | 2025-10-09 10:31:48 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:48.639999 | 
orchestrator | 2025-10-09 10:31:48 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:48.640027 | orchestrator | 2025-10-09 10:31:48 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:51.683812 | orchestrator | 2025-10-09 10:31:51 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:51.685196 | orchestrator | 2025-10-09 10:31:51 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:51.687421 | orchestrator | 2025-10-09 10:31:51 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:51.687448 | orchestrator | 2025-10-09 10:31:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:54.736706 | orchestrator | 2025-10-09 10:31:54 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:54.738809 | orchestrator | 2025-10-09 10:31:54 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:54.740012 | orchestrator | 2025-10-09 10:31:54 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:54.740038 | orchestrator | 2025-10-09 10:31:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:31:57.794013 | orchestrator | 2025-10-09 10:31:57 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:31:57.796705 | orchestrator | 2025-10-09 10:31:57 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:31:57.798755 | orchestrator | 2025-10-09 10:31:57 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:31:57.798782 | orchestrator | 2025-10-09 10:31:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:00.842439 | orchestrator | 2025-10-09 10:32:00 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:00.843121 | orchestrator | 2025-10-09 10:32:00 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:00.845621 | orchestrator | 2025-10-09 10:32:00 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:32:00.846384 | orchestrator | 2025-10-09 10:32:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:03.899171 | orchestrator | 2025-10-09 10:32:03 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:03.901248 | orchestrator | 2025-10-09 10:32:03 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:03.903932 | orchestrator | 2025-10-09 10:32:03 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:32:03.904606 | orchestrator | 2025-10-09 10:32:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:06.951360 | orchestrator | 2025-10-09 10:32:06 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:06.952760 | orchestrator | 2025-10-09 10:32:06 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:06.954699 | orchestrator | 2025-10-09 10:32:06 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state STARTED 2025-10-09 10:32:06.954725 | orchestrator | 2025-10-09 10:32:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:10.000119 | orchestrator | 2025-10-09 10:32:09 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:10.004272 | orchestrator | 2025-10-09 10:32:10 | INFO  | Task 
3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:10.007756 | orchestrator | 2025-10-09 10:32:10 | INFO  | Task 1096fc0a-4a94-4710-8ff5-19a1c973fc7d is in state SUCCESS 2025-10-09 10:32:10.008856 | orchestrator | 2025-10-09 10:32:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:10.011278 | orchestrator | 2025-10-09 10:32:10.011313 | orchestrator | 2025-10-09 10:32:10.011325 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:32:10.011337 | orchestrator | 2025-10-09 10:32:10.011348 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:32:10.011360 | orchestrator | Thursday 09 October 2025 10:29:11 +0000 (0:00:00.269) 0:00:00.269 ****** 2025-10-09 10:32:10.011371 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:10.011383 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:10.011394 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:10.011405 | orchestrator | 2025-10-09 10:32:10.011416 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:32:10.011427 | orchestrator | Thursday 09 October 2025 10:29:11 +0000 (0:00:00.361) 0:00:00.630 ****** 2025-10-09 10:32:10.011438 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-10-09 10:32:10.011449 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-10-09 10:32:10.011460 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-10-09 10:32:10.011471 | orchestrator | 2025-10-09 10:32:10.011482 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-10-09 10:32:10.011493 | orchestrator | 2025-10-09 10:32:10.011503 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-10-09 10:32:10.011514 | orchestrator | Thursday 09 October 2025 10:29:12 +0000 (0:00:00.440) 0:00:01.071 ****** 2025-10-09 10:32:10.011525 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:10.011536 | orchestrator | 2025-10-09 10:32:10.011548 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-10-09 10:32:10.011559 | orchestrator | Thursday 09 October 2025 10:29:12 +0000 (0:00:00.545) 0:00:01.616 ****** 2025-10-09 10:32:10.011570 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:32:10.011581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:32:10.011592 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:32:10.011602 | orchestrator | 2025-10-09 10:32:10.011613 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-10-09 10:32:10.011642 | orchestrator | Thursday 09 October 2025 10:29:13 +0000 (0:00:00.690) 0:00:02.307 ****** 2025-10-09 10:32:10.011657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.011696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.011722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.011737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.011756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.011779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.011791 | orchestrator | 2025-10-09 10:32:10.011802 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-10-09 10:32:10.011813 | orchestrator | Thursday 09 October 2025 10:29:15 +0000 (0:00:01.906) 0:00:04.214 ****** 2025-10-09 10:32:10.011824 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:10.011835 | orchestrator | 2025-10-09 10:32:10.011846 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-10-09 10:32:10.011857 | orchestrator | Thursday 09 October 2025 10:29:15 +0000 (0:00:00.576) 0:00:04.791 ****** 2025-10-09 10:32:10.011877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.011890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.011909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.011930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.011951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.011965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.011979 | orchestrator | 2025-10-09 10:32:10.011992 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-10-09 10:32:10.012004 | orchestrator | Thursday 09 October 2025 10:29:18 +0000 (0:00:02.713) 0:00:07.504 ****** 2025-10-09 10:32:10.012028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:32:10.012042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:32:10.012055 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:10.012075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:32:10.012089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:32:10.012103 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:10.012127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:32:10.012141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:32:10.012154 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:10.012166 | orchestrator | 2025-10-09 10:32:10.012179 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-10-09 10:32:10.012191 | orchestrator | Thursday 09 October 2025 10:29:19 +0000 (0:00:00.872) 0:00:08.377 ****** 2025-10-09 10:32:10.012248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:32:10.012265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:32:10.012288 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:10.012305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:32:10.012317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:32:10.012329 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:10.012346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:32:10.012358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:32:10.012376 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:10.012387 | orchestrator | 2025-10-09 10:32:10.012398 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-10-09 10:32:10.012409 | orchestrator | Thursday 09 October 2025 10:29:20 +0000 (0:00:01.164) 0:00:09.541 ****** 2025-10-09 10:32:10.012426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.012438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.012449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.012468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.012498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.012511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.012523 | orchestrator | 2025-10-09 10:32:10.012534 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-10-09 10:32:10.012545 | orchestrator | Thursday 09 October 2025 10:29:23 +0000 (0:00:02.449) 0:00:11.990 ****** 2025-10-09 10:32:10.012556 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:10.012567 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:10.012578 | 
orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:10.012588 | orchestrator | 2025-10-09 10:32:10.012599 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-10-09 10:32:10.012610 | orchestrator | Thursday 09 October 2025 10:29:26 +0000 (0:00:03.130) 0:00:15.121 ****** 2025-10-09 10:32:10.012621 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:10.012632 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:10.012642 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:10.012653 | orchestrator | 2025-10-09 10:32:10.012664 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-10-09 10:32:10.012675 | orchestrator | Thursday 09 October 2025 10:29:28 +0000 (0:00:02.058) 0:00:17.179 ****** 2025-10-09 10:32:10.012695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.012713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.012730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.3.20250928', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:32:10.012742 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.012761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.012779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:32:10.012791 | orchestrator | 2025-10-09 10:32:10.012802 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-10-09 10:32:10.012813 | orchestrator | Thursday 09 October 2025 10:29:30 +0000 (0:00:02.019) 0:00:19.198 ****** 2025-10-09 10:32:10.012824 | 
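[editor's note] The loop items above are the kolla-ansible style service map that the opensearch role iterates over. Re-typed from the log output as YAML purely for readability (an illustrative sketch; the role's actual defaults file may be organized differently), the opensearch entry for testbed-node-0 looks roughly like:

    opensearch:
      container_name: opensearch
      group: opensearch
      enabled: true
      image: registry.osism.tech/kolla/release/opensearch:2.19.3.20250928
      environment:
        OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
      volumes:
        - "/etc/kolla/opensearch/:/var/lib/kolla/config_files/"
        - "/etc/localtime:/etc/localtime:ro"
        - "/etc/timezone:/etc/timezone:ro"
        - "opensearch:/var/lib/opensearch/data"
        - "kolla_logs:/var/log/kolla/"
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]
        timeout: "30"
      haproxy:
        opensearch:
          enabled: true
          mode: http
          external: false
          port: "9200"
          frontend_http_extra:
            - option dontlog-normal

The opensearch-dashboards entry follows the same pattern on port 5601 and additionally defines an external haproxy frontend behind api.testbed.osism.xyz.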
orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:10.012835 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:10.012846 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:10.012857 | orchestrator | 2025-10-09 10:32:10.012868 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-10-09 10:32:10.012883 | orchestrator | Thursday 09 October 2025 10:29:30 +0000 (0:00:00.384) 0:00:19.583 ****** 2025-10-09 10:32:10.012894 | orchestrator | 2025-10-09 10:32:10.012904 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-10-09 10:32:10.012915 | orchestrator | Thursday 09 October 2025 10:29:30 +0000 (0:00:00.085) 0:00:19.668 ****** 2025-10-09 10:32:10.012926 | orchestrator | 2025-10-09 10:32:10.012937 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-10-09 10:32:10.012948 | orchestrator | Thursday 09 October 2025 10:29:30 +0000 (0:00:00.066) 0:00:19.735 ****** 2025-10-09 10:32:10.012958 | orchestrator | 2025-10-09 10:32:10.012969 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-10-09 10:32:10.012980 | orchestrator | Thursday 09 October 2025 10:29:30 +0000 (0:00:00.063) 0:00:19.798 ****** 2025-10-09 10:32:10.012990 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:10.013001 | orchestrator | 2025-10-09 10:32:10.013012 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-10-09 10:32:10.013023 | orchestrator | Thursday 09 October 2025 10:29:31 +0000 (0:00:00.235) 0:00:20.033 ****** 2025-10-09 10:32:10.013033 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:10.013044 | orchestrator | 2025-10-09 10:32:10.013055 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-10-09 10:32:10.013065 | orchestrator | Thursday 09 October 2025 10:29:31 +0000 (0:00:00.710) 0:00:20.744 ****** 2025-10-09 10:32:10.013076 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:10.013087 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:10.013098 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:10.013109 | orchestrator | 2025-10-09 10:32:10.013120 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-10-09 10:32:10.013130 | orchestrator | Thursday 09 October 2025 10:30:35 +0000 (0:01:03.768) 0:01:24.513 ****** 2025-10-09 10:32:10.013141 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:10.013152 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:10.013163 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:10.013173 | orchestrator | 2025-10-09 10:32:10.013184 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-10-09 10:32:10.013202 | orchestrator | Thursday 09 October 2025 10:31:56 +0000 (0:01:20.829) 0:02:45.342 ****** 2025-10-09 10:32:10.013230 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:10.013241 | orchestrator | 2025-10-09 10:32:10.013253 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-10-09 10:32:10.013264 | orchestrator | Thursday 09 October 2025 10:31:57 +0000 (0:00:00.745) 0:02:46.088 ****** 2025-10-09 10:32:10.013274 | orchestrator | ok: [testbed-node-0] 2025-10-09 
10:32:10.013285 | orchestrator | 2025-10-09 10:32:10.013296 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-10-09 10:32:10.013307 | orchestrator | Thursday 09 October 2025 10:31:59 +0000 (0:00:02.490) 0:02:48.578 ****** 2025-10-09 10:32:10.013318 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:10.013329 | orchestrator | 2025-10-09 10:32:10.013340 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-10-09 10:32:10.013351 | orchestrator | Thursday 09 October 2025 10:32:01 +0000 (0:00:02.342) 0:02:50.921 ****** 2025-10-09 10:32:10.013362 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:10.013372 | orchestrator | 2025-10-09 10:32:10.013383 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-10-09 10:32:10.013394 | orchestrator | Thursday 09 October 2025 10:32:04 +0000 (0:00:02.801) 0:02:53.722 ****** 2025-10-09 10:32:10.013405 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:10.013416 | orchestrator | 2025-10-09 10:32:10.013433 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:32:10.013445 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:32:10.013457 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:32:10.013468 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:32:10.013479 | orchestrator | 2025-10-09 10:32:10.013490 | orchestrator | 2025-10-09 10:32:10.013501 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:32:10.013512 | orchestrator | Thursday 09 October 2025 10:32:07 +0000 (0:00:02.601) 0:02:56.323 ****** 2025-10-09 10:32:10.013523 | orchestrator | =============================================================================== 2025-10-09 10:32:10.013534 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.83s 2025-10-09 10:32:10.013545 | orchestrator | opensearch : Restart opensearch container ------------------------------ 63.77s 2025-10-09 10:32:10.013555 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.13s 2025-10-09 10:32:10.013566 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.80s 2025-10-09 10:32:10.013577 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.71s 2025-10-09 10:32:10.013588 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.60s 2025-10-09 10:32:10.013599 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.49s 2025-10-09 10:32:10.013610 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.45s 2025-10-09 10:32:10.013621 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.34s 2025-10-09 10:32:10.013631 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.06s 2025-10-09 10:32:10.013647 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.02s 2025-10-09 10:32:10.013658 | orchestrator | opensearch : Ensuring config directories exist 
-------------------------- 1.91s 2025-10-09 10:32:10.013669 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.16s 2025-10-09 10:32:10.013680 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.87s 2025-10-09 10:32:10.013697 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2025-10-09 10:32:10.013708 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.71s 2025-10-09 10:32:10.013719 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s 2025-10-09 10:32:10.013729 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2025-10-09 10:32:10.013740 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2025-10-09 10:32:10.013751 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-10-09 10:32:13.059199 | orchestrator | 2025-10-09 10:32:13 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:13.060094 | orchestrator | 2025-10-09 10:32:13 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:13.060624 | orchestrator | 2025-10-09 10:32:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:16.118013 | orchestrator | 2025-10-09 10:32:16 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:16.120299 | orchestrator | 2025-10-09 10:32:16 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:16.120326 | orchestrator | 2025-10-09 10:32:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:19.161410 | orchestrator | 2025-10-09 10:32:19 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:19.162572 | orchestrator | 2025-10-09 10:32:19 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:19.162605 | orchestrator | 2025-10-09 10:32:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:22.212889 | orchestrator | 2025-10-09 10:32:22 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:22.214824 | orchestrator | 2025-10-09 10:32:22 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:22.214944 | orchestrator | 2025-10-09 10:32:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:25.256370 | orchestrator | 2025-10-09 10:32:25 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:25.257640 | orchestrator | 2025-10-09 10:32:25 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:25.258164 | orchestrator | 2025-10-09 10:32:25 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:28.292041 | orchestrator | 2025-10-09 10:32:28 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state STARTED 2025-10-09 10:32:28.293036 | orchestrator | 2025-10-09 10:32:28 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:28.293255 | orchestrator | 2025-10-09 10:32:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:31.345954 | orchestrator | 2025-10-09 10:32:31 | INFO  | Task 7d8e77c7-a093-4b4b-ab12-19eb53341dbe is in state SUCCESS 2025-10-09 10:32:31.347049 | orchestrator | 2025-10-09 10:32:31.347088 | 
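[editor's note] The last three opensearch tasks above (check / create / apply a log retention policy) run once against testbed-node-0 and talk to the Index State Management plugin. As a rough illustration of what such tasks can look like, here is a minimal sketch using the ISM REST API; the target URL, policy name, index pattern and 14-day retention below are placeholders, not the values used by the actual role.

    # Sketch only: create an ISM policy that deletes old indices, then attach
    # it to existing indices. All names and values here are assumptions.
    - name: Create log retention policy (illustrative)
      ansible.builtin.uri:
        url: "http://192.168.16.10:9200/_plugins/_ism/policies/retention"
        method: PUT
        body_format: json
        body:
          policy:
            description: "Delete old log indices"
            default_state: hot
            states:
              - name: hot
                actions: []
                transitions:
                  - state_name: delete
                    conditions:
                      min_index_age: "14d"
              - name: delete
                actions:
                  - delete: {}
                transitions: []
            ism_template:
              index_patterns: ["flog-*"]
        status_code: [200, 201]
      run_once: true

    - name: Apply retention policy to existing indices (illustrative)
      ansible.builtin.uri:
        url: "http://192.168.16.10:9200/_plugins/_ism/add/flog-*"
        method: POST
        body_format: json
        body:
          policy_id: retention
        status_code: 200
      run_once: true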
orchestrator | 2025-10-09 10:32:31.347101 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-10-09 10:32:31.347113 | orchestrator | 2025-10-09 10:32:31.347124 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-10-09 10:32:31.347135 | orchestrator | Thursday 09 October 2025 10:29:11 +0000 (0:00:00.110) 0:00:00.110 ****** 2025-10-09 10:32:31.347146 | orchestrator | ok: [localhost] => { 2025-10-09 10:32:31.347158 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-10-09 10:32:31.347169 | orchestrator | } 2025-10-09 10:32:31.347181 | orchestrator | 2025-10-09 10:32:31.347474 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-10-09 10:32:31.347493 | orchestrator | Thursday 09 October 2025 10:29:11 +0000 (0:00:00.061) 0:00:00.172 ****** 2025-10-09 10:32:31.347504 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-10-09 10:32:31.347516 | orchestrator | ...ignoring 2025-10-09 10:32:31.347527 | orchestrator | 2025-10-09 10:32:31.347538 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-10-09 10:32:31.347699 | orchestrator | Thursday 09 October 2025 10:29:14 +0000 (0:00:02.979) 0:00:03.151 ****** 2025-10-09 10:32:31.347723 | orchestrator | skipping: [localhost] 2025-10-09 10:32:31.347741 | orchestrator | 2025-10-09 10:32:31.347752 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-10-09 10:32:31.347763 | orchestrator | Thursday 09 October 2025 10:29:14 +0000 (0:00:00.069) 0:00:03.221 ****** 2025-10-09 10:32:31.347774 | orchestrator | ok: [localhost] 2025-10-09 10:32:31.347784 | orchestrator | 2025-10-09 10:32:31.347795 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:32:31.347806 | orchestrator | 2025-10-09 10:32:31.347833 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:32:31.347845 | orchestrator | Thursday 09 October 2025 10:29:14 +0000 (0:00:00.176) 0:00:03.397 ****** 2025-10-09 10:32:31.347855 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.347866 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:31.347877 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:31.347887 | orchestrator | 2025-10-09 10:32:31.347898 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:32:31.347909 | orchestrator | Thursday 09 October 2025 10:29:14 +0000 (0:00:00.348) 0:00:03.745 ****** 2025-10-09 10:32:31.347919 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-10-09 10:32:31.347930 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-10-09 10:32:31.347941 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-10-09 10:32:31.347952 | orchestrator | 2025-10-09 10:32:31.347962 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-10-09 10:32:31.347973 | orchestrator | 2025-10-09 10:32:31.347984 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-10-09 10:32:31.347995 | orchestrator | Thursday 09 October 2025 
10:29:15 +0000 (0:00:00.630) 0:00:04.376 ****** 2025-10-09 10:32:31.348005 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:32:31.348016 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-10-09 10:32:31.348027 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-10-09 10:32:31.348037 | orchestrator | 2025-10-09 10:32:31.348048 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-09 10:32:31.348059 | orchestrator | Thursday 09 October 2025 10:29:15 +0000 (0:00:00.400) 0:00:04.776 ****** 2025-10-09 10:32:31.348070 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:31.348081 | orchestrator | 2025-10-09 10:32:31.348092 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-10-09 10:32:31.348103 | orchestrator | Thursday 09 October 2025 10:29:16 +0000 (0:00:00.545) 0:00:05.322 ****** 2025-10-09 10:32:31.348136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.348171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.348184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.348204 | orchestrator | 2025-10-09 10:32:31.348250 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-10-09 10:32:31.348262 | orchestrator | Thursday 09 October 2025 10:29:19 +0000 (0:00:02.935) 0:00:08.258 ****** 2025-10-09 10:32:31.348273 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.348284 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.348295 | orchestrator | 
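[editor's note] Referring back to the ignored failure at the start of this play group: the "Check MariaDB service" task probes 192.168.16.9:3306 for the string "MariaDB" in the server greeting and is expected to time out on a fresh deployment, which is why kolla_action_mariadb falls back to the normal deploy action. The error message format matches the wait_for module; the sketch below (module choice and timeout are assumptions) shows how such a probe typically gates the deploy/upgrade decision.

    # Sketch only: probe for a running MariaDB before choosing the action.
    - name: Check MariaDB service
      ansible.builtin.wait_for:
        host: 192.168.16.9
        port: 3306
        search_regex: MariaDB   # server greeting contains the version string
        timeout: 2
      register: mariadb_check
      ignore_errors: true

    - name: Set kolla_action_mariadb = upgrade if MariaDB is already running
      ansible.builtin.set_fact:
        kolla_action_mariadb: upgrade
      when: mariadb_check is not failed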
skipping: [testbed-node-2] 2025-10-09 10:32:31.348308 | orchestrator | 2025-10-09 10:32:31.348320 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-10-09 10:32:31.348332 | orchestrator | Thursday 09 October 2025 10:29:20 +0000 (0:00:00.836) 0:00:09.094 ****** 2025-10-09 10:32:31.348344 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.348357 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.348368 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.348380 | orchestrator | 2025-10-09 10:32:31.348393 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-10-09 10:32:31.348406 | orchestrator | Thursday 09 October 2025 10:29:21 +0000 (0:00:01.528) 0:00:10.622 ****** 2025-10-09 10:32:31.348425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.348447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.348475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.348489 | orchestrator | 2025-10-09 10:32:31.348501 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-10-09 10:32:31.348513 | orchestrator | Thursday 09 October 2025 10:29:25 +0000 (0:00:04.153) 0:00:14.776 ****** 2025-10-09 10:32:31.348525 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.348538 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.348550 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.348562 | orchestrator | 2025-10-09 10:32:31.348574 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-10-09 
10:32:31.348594 | orchestrator | Thursday 09 October 2025 10:29:27 +0000 (0:00:01.297) 0:00:16.074 ****** 2025-10-09 10:32:31.348605 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.348618 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:31.348630 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:31.348643 | orchestrator | 2025-10-09 10:32:31.348655 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-09 10:32:31.348665 | orchestrator | Thursday 09 October 2025 10:29:31 +0000 (0:00:04.580) 0:00:20.654 ****** 2025-10-09 10:32:31.348676 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:31.348687 | orchestrator | 2025-10-09 10:32:31.348698 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-10-09 10:32:31.348709 | orchestrator | Thursday 09 October 2025 10:29:32 +0000 (0:00:00.748) 0:00:21.403 ****** 2025-10-09 10:32:31.348730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.348742 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.348759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.348777 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.348797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.348809 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.348820 | orchestrator | 2025-10-09 10:32:31.348831 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-10-09 10:32:31.348841 | orchestrator | Thursday 09 October 2025 
10:29:36 +0000 (0:00:03.794) 0:00:25.198 ****** 2025-10-09 10:32:31.348858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.348877 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.348895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.348908 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.348924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.348948 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.348959 | orchestrator | 2025-10-09 10:32:31.348970 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-10-09 10:32:31.348981 | orchestrator | Thursday 09 October 2025 10:29:40 +0000 (0:00:04.001) 0:00:29.200 ****** 2025-10-09 10:32:31.348998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.349010 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.349027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.349045 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.349057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:32:31.349069 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.349080 | orchestrator | 2025-10-09 10:32:31.349090 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-10-09 10:32:31.349101 | orchestrator | Thursday 09 October 2025 10:29:43 +0000 (0:00:02.975) 0:00:32.175 ****** 2025-10-09 10:32:31.349125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.349150 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.349182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:32:31.349201 | orchestrator | 2025-10-09 10:32:31.349245 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-10-09 10:32:31.349256 | orchestrator | Thursday 09 October 2025 10:29:47 +0000 (0:00:04.276) 0:00:36.452 ****** 2025-10-09 10:32:31.349267 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.349278 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:31.349289 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:31.349300 | orchestrator | 2025-10-09 10:32:31.349311 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-10-09 10:32:31.349322 | orchestrator | Thursday 09 October 2025 10:29:48 +0000 (0:00:00.919) 0:00:37.371 ****** 2025-10-09 10:32:31.349333 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.349344 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:31.349355 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:31.349366 | orchestrator | 2025-10-09 10:32:31.349377 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-10-09 10:32:31.349387 | orchestrator | Thursday 09 October 2025 10:29:48 +0000 (0:00:00.486) 0:00:37.857 ****** 2025-10-09 10:32:31.349398 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.349409 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:31.349420 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:31.349431 | orchestrator | 2025-10-09 10:32:31.349441 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-10-09 10:32:31.349452 | orchestrator | Thursday 09 October 2025 10:29:49 +0000 (0:00:00.378) 0:00:38.236 ****** 2025-10-09 10:32:31.349464 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-10-09 10:32:31.349475 | orchestrator | ...ignoring 2025-10-09 10:32:31.349486 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-10-09 10:32:31.349497 | orchestrator | ...ignoring 2025-10-09 10:32:31.349508 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-10-09 10:32:31.349519 | orchestrator | ...ignoring 2025-10-09 10:32:31.349530 | orchestrator | 2025-10-09 10:32:31.349540 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-10-09 10:32:31.349551 | orchestrator | Thursday 09 October 2025 10:30:00 +0000 (0:00:10.864) 0:00:49.101 ****** 2025-10-09 10:32:31.349562 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.349573 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:31.349584 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:31.349595 | orchestrator | 2025-10-09 10:32:31.349606 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-10-09 10:32:31.349617 | orchestrator | Thursday 09 October 2025 10:30:00 +0000 (0:00:00.456) 0:00:49.558 ****** 2025-10-09 10:32:31.349630 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.349648 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.349659 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.349670 | orchestrator | 2025-10-09 10:32:31.349687 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-10-09 10:32:31.349701 | orchestrator | Thursday 09 October 2025 10:30:01 +0000 (0:00:00.790) 0:00:50.349 ****** 2025-10-09 10:32:31.349712 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.349723 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.349733 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.349744 | orchestrator | 2025-10-09 10:32:31.349755 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-10-09 10:32:31.349766 | orchestrator | Thursday 09 October 2025 10:30:01 +0000 (0:00:00.515) 0:00:50.864 ****** 2025-10-09 10:32:31.349776 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.349788 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.349805 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.349816 | orchestrator | 2025-10-09 10:32:31.349827 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-10-09 10:32:31.349844 | orchestrator | Thursday 09 October 2025 10:30:02 +0000 (0:00:00.489) 0:00:51.354 ****** 2025-10-09 10:32:31.349855 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.349866 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:31.349877 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:31.349887 | orchestrator | 2025-10-09 10:32:31.349898 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-10-09 10:32:31.349909 | orchestrator | Thursday 09 October 2025 10:30:02 +0000 (0:00:00.502) 0:00:51.857 ****** 2025-10-09 10:32:31.349920 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.349930 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.349941 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.349952 | orchestrator | 2025-10-09 10:32:31.349963 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-09 10:32:31.349974 | orchestrator | Thursday 09 October 2025 10:30:03 +0000 (0:00:00.776) 0:00:52.633 ****** 2025-10-09 10:32:31.349985 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.349996 | orchestrator | skipping: 
[testbed-node-2] 2025-10-09 10:32:31.350006 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-10-09 10:32:31.350066 | orchestrator | 2025-10-09 10:32:31.350086 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-10-09 10:32:31.350097 | orchestrator | Thursday 09 October 2025 10:30:04 +0000 (0:00:00.401) 0:00:53.034 ****** 2025-10-09 10:32:31.350108 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.350118 | orchestrator | 2025-10-09 10:32:31.350129 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-10-09 10:32:31.350140 | orchestrator | Thursday 09 October 2025 10:30:15 +0000 (0:00:11.154) 0:01:04.188 ****** 2025-10-09 10:32:31.350151 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.350161 | orchestrator | 2025-10-09 10:32:31.350178 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-09 10:32:31.350189 | orchestrator | Thursday 09 October 2025 10:30:15 +0000 (0:00:00.153) 0:01:04.342 ****** 2025-10-09 10:32:31.350200 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.350274 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.350287 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.350297 | orchestrator | 2025-10-09 10:32:31.350308 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-10-09 10:32:31.350319 | orchestrator | Thursday 09 October 2025 10:30:16 +0000 (0:00:01.064) 0:01:05.406 ****** 2025-10-09 10:32:31.350330 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.350340 | orchestrator | 2025-10-09 10:32:31.350351 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-10-09 10:32:31.350362 | orchestrator | Thursday 09 October 2025 10:30:24 +0000 (0:00:08.119) 0:01:13.526 ****** 2025-10-09 10:32:31.350371 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.350381 | orchestrator | 2025-10-09 10:32:31.350391 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-10-09 10:32:31.350401 | orchestrator | Thursday 09 October 2025 10:30:26 +0000 (0:00:01.718) 0:01:15.244 ****** 2025-10-09 10:32:31.350410 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.350420 | orchestrator | 2025-10-09 10:32:31.350429 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-10-09 10:32:31.350439 | orchestrator | Thursday 09 October 2025 10:30:29 +0000 (0:00:02.666) 0:01:17.911 ****** 2025-10-09 10:32:31.350449 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.350458 | orchestrator | 2025-10-09 10:32:31.350468 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-10-09 10:32:31.350477 | orchestrator | Thursday 09 October 2025 10:30:29 +0000 (0:00:00.142) 0:01:18.053 ****** 2025-10-09 10:32:31.350487 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.350504 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.350514 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.350523 | orchestrator | 2025-10-09 10:32:31.350533 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-10-09 10:32:31.350543 | orchestrator | Thursday 09 October 2025 10:30:29 +0000 (0:00:00.320) 0:01:18.373 ****** 
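The handler sequence above brings the Galera cluster up in stages: the bootstrap container runs once on testbed-node-0, the first MariaDB container is started there, and the role waits for the service port and for WSREP to report a synced state before the remaining members are started. A minimal sketch of such a WSREP sync wait, using only built-in Ansible modules; the container name, login and retry budget are illustrative assumptions, not the role's actual task:

  # Sketch: poll Galera's WSREP state until the node reports "Synced".
  # Container name, login and retry budget are assumptions for illustration.
  - name: Wait for MariaDB service to sync WSREP (sketch)
    ansible.builtin.command: >
      docker exec mariadb mysql -u monitor -p{{ mariadb_monitor_password }}
      --silent --skip-column-names
      -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
    register: wsrep_state
    until: "'Synced' in wsrep_state.stdout"
    retries: 30
    delay: 10
    changed_when: false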
2025-10-09 10:32:31.350552 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.350562 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-10-09 10:32:31.350572 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:31.350581 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:31.350591 | orchestrator | 2025-10-09 10:32:31.350600 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-10-09 10:32:31.350610 | orchestrator | skipping: no hosts matched 2025-10-09 10:32:31.350619 | orchestrator | 2025-10-09 10:32:31.350629 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-10-09 10:32:31.350638 | orchestrator | 2025-10-09 10:32:31.350648 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-10-09 10:32:31.350658 | orchestrator | Thursday 09 October 2025 10:30:30 +0000 (0:00:00.603) 0:01:18.977 ****** 2025-10-09 10:32:31.350667 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:31.350677 | orchestrator | 2025-10-09 10:32:31.350686 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-09 10:32:31.350696 | orchestrator | Thursday 09 October 2025 10:30:49 +0000 (0:00:19.012) 0:01:37.989 ****** 2025-10-09 10:32:31.350705 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:31.350715 | orchestrator | 2025-10-09 10:32:31.350724 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-09 10:32:31.350734 | orchestrator | Thursday 09 October 2025 10:31:09 +0000 (0:00:20.617) 0:01:58.607 ****** 2025-10-09 10:32:31.350743 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:31.350753 | orchestrator | 2025-10-09 10:32:31.350762 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-10-09 10:32:31.350771 | orchestrator | 2025-10-09 10:32:31.350781 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-10-09 10:32:31.350790 | orchestrator | Thursday 09 October 2025 10:31:12 +0000 (0:00:02.544) 0:02:01.152 ****** 2025-10-09 10:32:31.350800 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:31.350809 | orchestrator | 2025-10-09 10:32:31.350819 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-09 10:32:31.350836 | orchestrator | Thursday 09 October 2025 10:31:31 +0000 (0:00:18.966) 0:02:20.118 ****** 2025-10-09 10:32:31.350846 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:31.350855 | orchestrator | 2025-10-09 10:32:31.350865 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-09 10:32:31.350874 | orchestrator | Thursday 09 October 2025 10:31:51 +0000 (0:00:20.636) 0:02:40.755 ****** 2025-10-09 10:32:31.350884 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:31.350893 | orchestrator | 2025-10-09 10:32:31.350902 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-10-09 10:32:31.350912 | orchestrator | 2025-10-09 10:32:31.350921 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-10-09 10:32:31.350931 | orchestrator | Thursday 09 October 2025 10:31:54 +0000 (0:00:02.706) 0:02:43.462 ****** 2025-10-09 10:32:31.350941 | orchestrator | changed: [testbed-node-0] 
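The restarts of testbed-node-1 and testbed-node-2 above are staggered: each node is restarted, then the play waits roughly 20 seconds for the service port and again for WSREP sync before moving on, and only then is the bootstrap host itself restarted. The earlier "Timeout when waiting for search string MariaDB in 192.168.16.10:3306" failures are the typical output of such a port probe. A hedged sketch of a port-liveness check of this kind; the address comes from the log and the timeout is an illustrative value, not the role's definition:

  # Sketch: probe the Galera port and wait for the "MariaDB" protocol banner.
  # Address and timeout are illustrative; the real role derives them from its vars.
  - name: Wait for MariaDB service port liveness (sketch)
    ansible.builtin.wait_for:
      host: 192.168.16.10        # testbed-node-0's API address from the log
      port: 3306
      connect_timeout: 1
      timeout: 60
      search_regex: MariaDB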
2025-10-09 10:32:31.350950 | orchestrator | 2025-10-09 10:32:31.350960 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-09 10:32:31.350969 | orchestrator | Thursday 09 October 2025 10:32:07 +0000 (0:00:12.973) 0:02:56.435 ****** 2025-10-09 10:32:31.350979 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.350988 | orchestrator | 2025-10-09 10:32:31.350998 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-09 10:32:31.351008 | orchestrator | Thursday 09 October 2025 10:32:13 +0000 (0:00:05.591) 0:03:02.027 ****** 2025-10-09 10:32:31.351023 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.351032 | orchestrator | 2025-10-09 10:32:31.351042 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-10-09 10:32:31.351051 | orchestrator | 2025-10-09 10:32:31.351061 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-10-09 10:32:31.351071 | orchestrator | Thursday 09 October 2025 10:32:15 +0000 (0:00:02.845) 0:03:04.872 ****** 2025-10-09 10:32:31.351085 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:31.351095 | orchestrator | 2025-10-09 10:32:31.351104 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-10-09 10:32:31.351114 | orchestrator | Thursday 09 October 2025 10:32:16 +0000 (0:00:00.614) 0:03:05.487 ****** 2025-10-09 10:32:31.351123 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.351133 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.351142 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.351152 | orchestrator | 2025-10-09 10:32:31.351161 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-10-09 10:32:31.351171 | orchestrator | Thursday 09 October 2025 10:32:19 +0000 (0:00:02.564) 0:03:08.051 ****** 2025-10-09 10:32:31.351181 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.351198 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.351234 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.351250 | orchestrator | 2025-10-09 10:32:31.351264 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-10-09 10:32:31.351280 | orchestrator | Thursday 09 October 2025 10:32:21 +0000 (0:00:02.484) 0:03:10.536 ****** 2025-10-09 10:32:31.351295 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.351312 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.351330 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.351349 | orchestrator | 2025-10-09 10:32:31.351366 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-10-09 10:32:31.351383 | orchestrator | Thursday 09 October 2025 10:32:23 +0000 (0:00:02.317) 0:03:12.853 ****** 2025-10-09 10:32:31.351399 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.351409 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.351419 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:31.351428 | orchestrator | 2025-10-09 10:32:31.351438 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-10-09 10:32:31.351447 | orchestrator | Thursday 09 October 2025 10:32:26 +0000 (0:00:02.479) 0:03:15.333 ****** 
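Post-deploy then creates the database accounts on the bootstrap host only (the other nodes skip these tasks): the shard root user, the monitor user referenced by the container's MYSQL_USERNAME/MYSQL_PASSWORD environment, and a backup user with Mariabackup grants, before waiting for the service to answer through the VIP. A hedged sketch of a monitor-user creation; the grant, host pattern and password variable are illustrative assumptions, not the role's actual task:

  # Sketch: create the clustercheck/haproxy monitor user on the bootstrap host.
  # Grant, host pattern and password variable are assumptions for illustration.
  - name: Creating mysql monitor user (sketch)
    community.mysql.mysql_user:
      login_host: 192.168.16.10
      login_user: root
      login_password: "{{ database_password }}"
      name: monitor
      host: "%"
      password: "{{ mariadb_monitor_password }}"
      priv: "*.*:USAGE"
      state: present
    run_once: true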
2025-10-09 10:32:31.351457 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:31.351466 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:31.351476 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:31.351485 | orchestrator | 2025-10-09 10:32:31.351495 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-10-09 10:32:31.351504 | orchestrator | Thursday 09 October 2025 10:32:29 +0000 (0:00:03.378) 0:03:18.712 ****** 2025-10-09 10:32:31.351514 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:31.351523 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:31.351533 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:31.351543 | orchestrator | 2025-10-09 10:32:31.351552 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:32:31.351562 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-10-09 10:32:31.351573 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-10-09 10:32:31.351585 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-10-09 10:32:31.351594 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-10-09 10:32:31.351613 | orchestrator | 2025-10-09 10:32:31.351623 | orchestrator | 2025-10-09 10:32:31.351633 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:32:31.351642 | orchestrator | Thursday 09 October 2025 10:32:30 +0000 (0:00:00.248) 0:03:18.960 ****** 2025-10-09 10:32:31.351652 | orchestrator | =============================================================================== 2025-10-09 10:32:31.351661 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.25s 2025-10-09 10:32:31.351671 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.98s 2025-10-09 10:32:31.351687 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.97s 2025-10-09 10:32:31.351697 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.15s 2025-10-09 10:32:31.351707 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s 2025-10-09 10:32:31.351716 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.12s 2025-10-09 10:32:31.351726 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.59s 2025-10-09 10:32:31.351735 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.25s 2025-10-09 10:32:31.351745 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.58s 2025-10-09 10:32:31.351754 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.28s 2025-10-09 10:32:31.351764 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.15s 2025-10-09 10:32:31.351773 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.00s 2025-10-09 10:32:31.351782 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.79s 2025-10-09 10:32:31.351792 | orchestrator | mariadb : Wait for MariaDB service to be 
ready through VIP -------------- 3.38s 2025-10-09 10:32:31.351801 | orchestrator | Check MariaDB service --------------------------------------------------- 2.98s 2025-10-09 10:32:31.351811 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.98s 2025-10-09 10:32:31.351820 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.94s 2025-10-09 10:32:31.351835 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.85s 2025-10-09 10:32:31.351845 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.67s 2025-10-09 10:32:31.351854 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.56s 2025-10-09 10:32:31.351864 | orchestrator | 2025-10-09 10:32:31 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:31.351873 | orchestrator | 2025-10-09 10:32:31 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:34.395300 | orchestrator | 2025-10-09 10:32:34 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:34.396109 | orchestrator | 2025-10-09 10:32:34 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:34.397275 | orchestrator | 2025-10-09 10:32:34 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:34.397294 | orchestrator | 2025-10-09 10:32:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:37.432762 | orchestrator | 2025-10-09 10:32:37 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:37.437025 | orchestrator | 2025-10-09 10:32:37 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:37.441000 | orchestrator | 2025-10-09 10:32:37 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:37.441036 | orchestrator | 2025-10-09 10:32:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:40.482109 | orchestrator | 2025-10-09 10:32:40 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:40.482623 | orchestrator | 2025-10-09 10:32:40 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:40.483732 | orchestrator | 2025-10-09 10:32:40 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:40.484123 | orchestrator | 2025-10-09 10:32:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:43.533160 | orchestrator | 2025-10-09 10:32:43 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:43.537606 | orchestrator | 2025-10-09 10:32:43 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:43.539377 | orchestrator | 2025-10-09 10:32:43 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:43.539403 | orchestrator | 2025-10-09 10:32:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:46.571947 | orchestrator | 2025-10-09 10:32:46 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:46.572982 | orchestrator | 2025-10-09 10:32:46 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:46.574484 | orchestrator | 2025-10-09 10:32:46 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:46.574741 | 
orchestrator | 2025-10-09 10:32:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:49.616495 | orchestrator | 2025-10-09 10:32:49 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:49.617953 | orchestrator | 2025-10-09 10:32:49 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:49.619274 | orchestrator | 2025-10-09 10:32:49 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:49.619563 | orchestrator | 2025-10-09 10:32:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:52.692715 | orchestrator | 2025-10-09 10:32:52 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:52.692843 | orchestrator | 2025-10-09 10:32:52 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:52.693953 | orchestrator | 2025-10-09 10:32:52 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:52.693991 | orchestrator | 2025-10-09 10:32:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:55.730192 | orchestrator | 2025-10-09 10:32:55 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:55.730342 | orchestrator | 2025-10-09 10:32:55 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:55.731161 | orchestrator | 2025-10-09 10:32:55 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:55.731184 | orchestrator | 2025-10-09 10:32:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:58.769855 | orchestrator | 2025-10-09 10:32:58 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:32:58.770169 | orchestrator | 2025-10-09 10:32:58 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:32:58.771052 | orchestrator | 2025-10-09 10:32:58 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:32:58.771086 | orchestrator | 2025-10-09 10:32:58 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:01.810779 | orchestrator | 2025-10-09 10:33:01 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:01.812492 | orchestrator | 2025-10-09 10:33:01 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:33:01.813448 | orchestrator | 2025-10-09 10:33:01 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:01.813477 | orchestrator | 2025-10-09 10:33:01 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:04.864535 | orchestrator | 2025-10-09 10:33:04 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:04.866285 | orchestrator | 2025-10-09 10:33:04 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:33:04.867856 | orchestrator | 2025-10-09 10:33:04 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:04.868317 | orchestrator | 2025-10-09 10:33:04 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:07.910566 | orchestrator | 2025-10-09 10:33:07 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:07.912301 | orchestrator | 2025-10-09 10:33:07 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:33:07.914492 | orchestrator | 2025-10-09 10:33:07 | INFO  | Task 
1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:07.914522 | orchestrator | 2025-10-09 10:33:07 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:10.956629 | orchestrator | 2025-10-09 10:33:10 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:10.959028 | orchestrator | 2025-10-09 10:33:10 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:33:10.962092 | orchestrator | 2025-10-09 10:33:10 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:10.962602 | orchestrator | 2025-10-09 10:33:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:14.011864 | orchestrator | 2025-10-09 10:33:14 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:14.013148 | orchestrator | 2025-10-09 10:33:14 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state STARTED 2025-10-09 10:33:14.015567 | orchestrator | 2025-10-09 10:33:14 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:14.015613 | orchestrator | 2025-10-09 10:33:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:17.069803 | orchestrator | 2025-10-09 10:33:17 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:17.073468 | orchestrator | 2025-10-09 10:33:17 | INFO  | Task 3ec9c8b0-b370-44de-b304-b35c52fd65b6 is in state SUCCESS 2025-10-09 10:33:17.076413 | orchestrator | 2025-10-09 10:33:17.076709 | orchestrator | 2025-10-09 10:33:17.076726 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-10-09 10:33:17.076739 | orchestrator | 2025-10-09 10:33:17.076750 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-10-09 10:33:17.076762 | orchestrator | Thursday 09 October 2025 10:31:05 +0000 (0:00:00.743) 0:00:00.743 ****** 2025-10-09 10:33:17.076774 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:33:17.076786 | orchestrator | 2025-10-09 10:33:17.076798 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-10-09 10:33:17.076809 | orchestrator | Thursday 09 October 2025 10:31:06 +0000 (0:00:00.782) 0:00:01.526 ****** 2025-10-09 10:33:17.076820 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.076833 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.076845 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.076856 | orchestrator | 2025-10-09 10:33:17.076895 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-10-09 10:33:17.076907 | orchestrator | Thursday 09 October 2025 10:31:06 +0000 (0:00:00.778) 0:00:02.304 ****** 2025-10-09 10:33:17.076918 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.076928 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.076939 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.076950 | orchestrator | 2025-10-09 10:33:17.076961 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-10-09 10:33:17.076972 | orchestrator | Thursday 09 October 2025 10:31:07 +0000 (0:00:00.331) 0:00:02.635 ****** 2025-10-09 10:33:17.076983 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.076993 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.077004 | orchestrator | ok: 
[testbed-node-5] 2025-10-09 10:33:17.077016 | orchestrator | 2025-10-09 10:33:17.077046 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-10-09 10:33:17.077058 | orchestrator | Thursday 09 October 2025 10:31:08 +0000 (0:00:00.893) 0:00:03.529 ****** 2025-10-09 10:33:17.077069 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.077080 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.077090 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.077101 | orchestrator | 2025-10-09 10:33:17.077112 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-10-09 10:33:17.077123 | orchestrator | Thursday 09 October 2025 10:31:08 +0000 (0:00:00.338) 0:00:03.868 ****** 2025-10-09 10:33:17.077134 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.077145 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.077156 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.077166 | orchestrator | 2025-10-09 10:33:17.077177 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-10-09 10:33:17.077188 | orchestrator | Thursday 09 October 2025 10:31:08 +0000 (0:00:00.341) 0:00:04.209 ****** 2025-10-09 10:33:17.077199 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.077245 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.077256 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.077267 | orchestrator | 2025-10-09 10:33:17.077278 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-10-09 10:33:17.077289 | orchestrator | Thursday 09 October 2025 10:31:09 +0000 (0:00:00.372) 0:00:04.581 ****** 2025-10-09 10:33:17.077300 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.077312 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.077323 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.077334 | orchestrator | 2025-10-09 10:33:17.077345 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-10-09 10:33:17.077356 | orchestrator | Thursday 09 October 2025 10:31:09 +0000 (0:00:00.543) 0:00:05.124 ****** 2025-10-09 10:33:17.077367 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.077378 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.077388 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.077399 | orchestrator | 2025-10-09 10:33:17.077410 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-10-09 10:33:17.077421 | orchestrator | Thursday 09 October 2025 10:31:10 +0000 (0:00:00.321) 0:00:05.446 ****** 2025-10-09 10:33:17.077432 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:33:17.077443 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:33:17.077454 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:33:17.077465 | orchestrator | 2025-10-09 10:33:17.077476 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-10-09 10:33:17.077486 | orchestrator | Thursday 09 October 2025 10:31:10 +0000 (0:00:00.739) 0:00:06.186 ****** 2025-10-09 10:33:17.077497 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.077508 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.077519 | 
orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.077530 | orchestrator | 2025-10-09 10:33:17.077551 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-10-09 10:33:17.077562 | orchestrator | Thursday 09 October 2025 10:31:11 +0000 (0:00:00.453) 0:00:06.639 ****** 2025-10-09 10:33:17.077573 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:33:17.077583 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:33:17.077594 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:33:17.077605 | orchestrator | 2025-10-09 10:33:17.077616 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-10-09 10:33:17.077627 | orchestrator | Thursday 09 October 2025 10:31:13 +0000 (0:00:02.241) 0:00:08.881 ****** 2025-10-09 10:33:17.077638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-10-09 10:33:17.077650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-10-09 10:33:17.077661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-10-09 10:33:17.077672 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.077683 | orchestrator | 2025-10-09 10:33:17.077694 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-10-09 10:33:17.077715 | orchestrator | Thursday 09 October 2025 10:31:13 +0000 (0:00:00.430) 0:00:09.311 ****** 2025-10-09 10:33:17.077729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.077744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.077755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.077766 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.077777 | orchestrator | 2025-10-09 10:33:17.077788 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-10-09 10:33:17.077799 | orchestrator | Thursday 09 October 2025 10:31:14 +0000 (0:00:00.841) 0:00:10.152 ****** 2025-10-09 10:33:17.077820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.077835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.077847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.077858 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.077869 | orchestrator | 2025-10-09 10:33:17.077880 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-10-09 10:33:17.077899 | orchestrator | Thursday 09 October 2025 10:31:14 +0000 (0:00:00.175) 0:00:10.328 ****** 2025-10-09 10:33:17.077912 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '98ad3cead297', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-10-09 10:31:11.946537', 'end': '2025-10-09 10:31:12.011181', 'delta': '0:00:00.064644', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['98ad3cead297'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-10-09 10:33:17.077928 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e45eba6631c4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-10-09 10:31:12.775485', 'end': '2025-10-09 10:31:12.833970', 'delta': '0:00:00.058485', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e45eba6631c4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-10-09 10:33:17.077950 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'af3ea0f1cff1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-10-09 10:31:13.329403', 'end': '2025-10-09 10:31:13.378083', 'delta': '0:00:00.048680', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['af3ea0f1cff1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-10-09 10:33:17.077962 | orchestrator | 2025-10-09 10:33:17.077973 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-10-09 
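Editor's note: the "Find a running mon container" / "Set_fact running_mon - container" results above show the exact per-monitor command (docker ps -q --filter name=ceph-mon-<hostname>). The same check as a small helper, for reference:

```python
import subprocess


def find_running_mon_container(hostname, container_binary="docker"):
    """Return the first container ID matching ceph-mon-<hostname>, or None."""
    result = subprocess.run(
        [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=True,
    )
    container_ids = result.stdout.split()
    return container_ids[0] if container_ids else None
```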
10:33:17.077984 | orchestrator | Thursday 09 October 2025 10:31:15 +0000 (0:00:00.445) 0:00:10.773 ****** 2025-10-09 10:33:17.077995 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.078006 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.078061 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.078076 | orchestrator | 2025-10-09 10:33:17.078087 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-10-09 10:33:17.078099 | orchestrator | Thursday 09 October 2025 10:31:15 +0000 (0:00:00.453) 0:00:11.227 ****** 2025-10-09 10:33:17.078110 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-10-09 10:33:17.078121 | orchestrator | 2025-10-09 10:33:17.078138 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-10-09 10:33:17.078149 | orchestrator | Thursday 09 October 2025 10:31:17 +0000 (0:00:01.723) 0:00:12.951 ****** 2025-10-09 10:33:17.078160 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078171 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078182 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078193 | orchestrator | 2025-10-09 10:33:17.078220 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-10-09 10:33:17.078232 | orchestrator | Thursday 09 October 2025 10:31:17 +0000 (0:00:00.324) 0:00:13.275 ****** 2025-10-09 10:33:17.078242 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078253 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078272 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078283 | orchestrator | 2025-10-09 10:33:17.078294 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-09 10:33:17.078305 | orchestrator | Thursday 09 October 2025 10:31:18 +0000 (0:00:00.415) 0:00:13.690 ****** 2025-10-09 10:33:17.078316 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078327 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078338 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078349 | orchestrator | 2025-10-09 10:33:17.078360 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-10-09 10:33:17.078371 | orchestrator | Thursday 09 October 2025 10:31:18 +0000 (0:00:00.519) 0:00:14.210 ****** 2025-10-09 10:33:17.078381 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.078392 | orchestrator | 2025-10-09 10:33:17.078403 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-10-09 10:33:17.078414 | orchestrator | Thursday 09 October 2025 10:31:18 +0000 (0:00:00.135) 0:00:14.345 ****** 2025-10-09 10:33:17.078425 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078436 | orchestrator | 2025-10-09 10:33:17.078447 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-09 10:33:17.078458 | orchestrator | Thursday 09 October 2025 10:31:19 +0000 (0:00:00.251) 0:00:14.597 ****** 2025-10-09 10:33:17.078469 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078480 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078491 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078502 | orchestrator | 2025-10-09 10:33:17.078513 | orchestrator | TASK [ceph-facts : Resolve device link(s)] 
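Editor's note: the fsid tasks above follow a get-or-generate pattern: read the fsid from the already-running cluster if possible ("Get current fsid if cluster is already running"), otherwise generate a fresh one ("Generate cluster fsid", skipped in this run). A sketch of that flow; the exact ceph fsid invocation through the mon container is an assumption based on the task names:

```python
import subprocess
import uuid


def get_or_generate_fsid(container_exec_cmd, cluster="ceph"):
    # container_exec_cmd is assumed to look like
    # ["docker", "exec", "ceph-mon-testbed-node-2"]; fall back to a freshly
    # generated UUID if the running cluster cannot be queried.
    try:
        result = subprocess.run(
            [*container_exec_cmd, "ceph", "--cluster", cluster, "fsid"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return str(uuid.uuid4())
```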
************************************* 2025-10-09 10:33:17.078523 | orchestrator | Thursday 09 October 2025 10:31:19 +0000 (0:00:00.319) 0:00:14.917 ****** 2025-10-09 10:33:17.078534 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078545 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078556 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078567 | orchestrator | 2025-10-09 10:33:17.078578 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-10-09 10:33:17.078589 | orchestrator | Thursday 09 October 2025 10:31:19 +0000 (0:00:00.318) 0:00:15.236 ****** 2025-10-09 10:33:17.078600 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078610 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078621 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078632 | orchestrator | 2025-10-09 10:33:17.078643 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-10-09 10:33:17.078654 | orchestrator | Thursday 09 October 2025 10:31:20 +0000 (0:00:00.590) 0:00:15.826 ****** 2025-10-09 10:33:17.078665 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078676 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078687 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078698 | orchestrator | 2025-10-09 10:33:17.078709 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-10-09 10:33:17.078720 | orchestrator | Thursday 09 October 2025 10:31:20 +0000 (0:00:00.353) 0:00:16.180 ****** 2025-10-09 10:33:17.078731 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078742 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078753 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078764 | orchestrator | 2025-10-09 10:33:17.078774 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-10-09 10:33:17.078785 | orchestrator | Thursday 09 October 2025 10:31:21 +0000 (0:00:00.357) 0:00:16.537 ****** 2025-10-09 10:33:17.078796 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078807 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078818 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078829 | orchestrator | 2025-10-09 10:33:17.078840 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-10-09 10:33:17.078859 | orchestrator | Thursday 09 October 2025 10:31:21 +0000 (0:00:00.350) 0:00:16.888 ****** 2025-10-09 10:33:17.078881 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.078892 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.078903 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.078914 | orchestrator | 2025-10-09 10:33:17.078925 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-10-09 10:33:17.078936 | orchestrator | Thursday 09 October 2025 10:31:22 +0000 (0:00:00.565) 0:00:17.454 ****** 2025-10-09 10:33:17.078949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54591ae8--508f--52be--a28f--431401b6d1ad-osd--block--54591ae8--508f--52be--a28f--431401b6d1ad', 'dm-uuid-LVM-EVsqi2xWuBrGH7ndZWt2fLOetE5tvdmnZAtuoviZdwNP6SNMqWPVWlqPBLzBSFKA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.078967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b4e7e843--affd--596c--a641--84f7bc574f86-osd--block--b4e7e843--affd--596c--a641--84f7bc574f86', 'dm-uuid-LVM-K47ukKc3iSjQ5ctiCOYZ8X6UjrEi5wyygeSwvWYR8NJ9dB5ju47sNL755joY0PaZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.078979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.078991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--54591ae8--508f--52be--a28f--431401b6d1ad-osd--block--54591ae8--508f--52be--a28f--431401b6d1ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YNXba9-HJ0W-oMiN-czED-sHHj-nVGE-aLrrmj', 'scsi-0QEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16', 'scsi-SQEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b4e7e843--affd--596c--a641--84f7bc574f86-osd--block--b4e7e843--affd--596c--a641--84f7bc574f86'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-haDIf7-Emsh-nQyd-5J3H-Zscp-VTbV-fvYdrM', 'scsi-0QEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057', 'scsi-SQEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2', 'scsi-SQEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef021fd3--cbd8--5c2d--a302--cb22746ffda4-osd--block--ef021fd3--cbd8--5c2d--a302--cb22746ffda4', 'dm-uuid-LVM-T6jYMBRL7dp9tkdsf0p0IlQUxDmrOpuqqEA9KWjQ2sMW5BlNPfyg7hM5mY97Cgfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079190 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.079202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--dd550970--9f11--5339--ba56--252bcc7474b0-osd--block--dd550970--9f11--5339--ba56--252bcc7474b0', 'dm-uuid-LVM-klrAyY2R3EDbTRZMGtUluIGdB92UKAjKW2w66lfYmOvHKm44Ai0bKUegTQpG4d7O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--858e0050--0995--591a--8058--d3b9e3e91008-osd--block--858e0050--0995--591a--8058--d3b9e3e91008', 'dm-uuid-LVM-VIOu0bezJs3RTPoIDuH3unxVMbg1Fhi6gEIo2sRefuTOgjraOMKyPpDJ80xf1QI1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7a727a7b--5e51--5ff8--a4b5--d0215765e108-osd--block--7a727a7b--5e51--5ff8--a4b5--d0215765e108', 'dm-uuid-LVM-A9aLUKkNez0PszVsSh0FgEdmDyanvMJtkM1tAbeCrlF43Wkmx7BLo2d0HFHi7tUf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ef021fd3--cbd8--5c2d--a302--cb22746ffda4-osd--block--ef021fd3--cbd8--5c2d--a302--cb22746ffda4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9WSgIJ-I51A-wqaD-NOuA-6Xc0-6Vv6-Zpuk9u', 'scsi-0QEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b', 'scsi-SQEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dd550970--9f11--5339--ba56--252bcc7474b0-osd--block--dd550970--9f11--5339--ba56--252bcc7474b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u7NTW9-427L-fzc2-K2VT-yHeG-nl1k-MmJnGV', 'scsi-0QEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c', 'scsi-SQEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256', 'scsi-SQEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079541 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.079552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:33:17.079617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--858e0050--0995--591a--8058--d3b9e3e91008-osd--block--858e0050--0995--591a--8058--d3b9e3e91008'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-llnifo-6NnZ-JfLO-NBAB-djPH-cniD-jJCdVG', 'scsi-0QEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317', 'scsi-SQEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7a727a7b--5e51--5ff8--a4b5--d0215765e108-osd--block--7a727a7b--5e51--5ff8--a4b5--d0215765e108'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8FejOd-vDRE-VG31-PbtH-QBde-6z0E-hwqNI0', 'scsi-0QEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5', 'scsi-SQEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb', 'scsi-SQEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:33:17.079688 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.079699 | orchestrator | 2025-10-09 10:33:17.079710 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-10-09 10:33:17.079721 | orchestrator | Thursday 09 October 2025 10:31:22 +0000 (0:00:00.712) 0:00:18.167 ****** 2025-10-09 10:33:17.079733 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54591ae8--508f--52be--a28f--431401b6d1ad-osd--block--54591ae8--508f--52be--a28f--431401b6d1ad', 'dm-uuid-LVM-EVsqi2xWuBrGH7ndZWt2fLOetE5tvdmnZAtuoviZdwNP6SNMqWPVWlqPBLzBSFKA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079751 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b4e7e843--affd--596c--a641--84f7bc574f86-osd--block--b4e7e843--affd--596c--a641--84f7bc574f86', 'dm-uuid-LVM-K47ukKc3iSjQ5ctiCOYZ8X6UjrEi5wyygeSwvWYR8NJ9dB5ju47sNL755joY0PaZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef021fd3--cbd8--5c2d--a302--cb22746ffda4-osd--block--ef021fd3--cbd8--5c2d--a302--cb22746ffda4', 'dm-uuid-LVM-T6jYMBRL7dp9tkdsf0p0IlQUxDmrOpuqqEA9KWjQ2sMW5BlNPfyg7hM5mY97Cgfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079905 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5c50ba9-a3c1-41f3-9f8e-c3c7b571fa1f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dd550970--9f11--5339--ba56--252bcc7474b0-osd--block--dd550970--9f11--5339--ba56--252bcc7474b0', 'dm-uuid-LVM-klrAyY2R3EDbTRZMGtUluIGdB92UKAjKW2w66lfYmOvHKm44Ai0bKUegTQpG4d7O'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--54591ae8--508f--52be--a28f--431401b6d1ad-osd--block--54591ae8--508f--52be--a28f--431401b6d1ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YNXba9-HJ0W-oMiN-czED-sHHj-nVGE-aLrrmj', 'scsi-0QEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16', 'scsi-SQEMU_QEMU_HARDDISK_814c6d5d-dde1-4cc2-a34c-0e2957be6b16'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b4e7e843--affd--596c--a641--84f7bc574f86-osd--block--b4e7e843--affd--596c--a641--84f7bc574f86'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-haDIf7-Emsh-nQyd-5J3H-Zscp-VTbV-fvYdrM', 'scsi-0QEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057', 'scsi-SQEMU_QEMU_HARDDISK_cb120539-e45f-4dc2-9fd5-4d9c62406057'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079969 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2', 'scsi-SQEMU_QEMU_HARDDISK_14000628-bc87-4bc6-bf7c-9844b85594c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.079997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080030 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080058 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080075 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080087 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080105 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080124 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4e0aebcf-e4e3-4bea-9a67-8744fd10874e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080137 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.080149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ef021fd3--cbd8--5c2d--a302--cb22746ffda4-osd--block--ef021fd3--cbd8--5c2d--a302--cb22746ffda4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9WSgIJ-I51A-wqaD-NOuA-6Xc0-6Vv6-Zpuk9u', 'scsi-0QEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b', 'scsi-SQEMU_QEMU_HARDDISK_6cf84d94-208a-47e1-8626-6f0f49ffb66b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080166 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--dd550970--9f11--5339--ba56--252bcc7474b0-osd--block--dd550970--9f11--5339--ba56--252bcc7474b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u7NTW9-427L-fzc2-K2VT-yHeG-nl1k-MmJnGV', 'scsi-0QEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c', 'scsi-SQEMU_QEMU_HARDDISK_a18ef090-7db7-4160-bb62-4e2315b7c41c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256', 'scsi-SQEMU_QEMU_HARDDISK_79b0088c-29b1-4160-91bf-334de808a256'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080701 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.080753 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--858e0050--0995--591a--8058--d3b9e3e91008-osd--block--858e0050--0995--591a--8058--d3b9e3e91008', 'dm-uuid-LVM-VIOu0bezJs3RTPoIDuH3unxVMbg1Fhi6gEIo2sRefuTOgjraOMKyPpDJ80xf1QI1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080771 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7a727a7b--5e51--5ff8--a4b5--d0215765e108-osd--block--7a727a7b--5e51--5ff8--a4b5--d0215765e108', 
'dm-uuid-LVM-A9aLUKkNez0PszVsSh0FgEdmDyanvMJtkM1tAbeCrlF43Wkmx7BLo2d0HFHi7tUf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080794 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080818 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080850 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080867 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d400245-ddcf-4a2b-b6fe-2c79382b1633-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080934 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--858e0050--0995--591a--8058--d3b9e3e91008-osd--block--858e0050--0995--591a--8058--d3b9e3e91008'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-llnifo-6NnZ-JfLO-NBAB-djPH-cniD-jJCdVG', 'scsi-0QEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317', 'scsi-SQEMU_QEMU_HARDDISK_aa506c68-74fd-4bd6-80c5-fdf748fe0317'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080954 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7a727a7b--5e51--5ff8--a4b5--d0215765e108-osd--block--7a727a7b--5e51--5ff8--a4b5--d0215765e108'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8FejOd-vDRE-VG31-PbtH-QBde-6z0E-hwqNI0', 'scsi-0QEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5', 'scsi-SQEMU_QEMU_HARDDISK_5ff7f971-6db7-40fa-8173-f07df227bea5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb', 'scsi-SQEMU_QEMU_HARDDISK_b8594df3-e172-4a8f-830d-865dfc79f7eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:33:17.080996 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.081007 | orchestrator | 2025-10-09 10:33:17.081041 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-10-09 10:33:17.081053 | orchestrator | Thursday 09 October 2025 10:31:23 +0000 (0:00:00.764) 0:00:18.931 ****** 2025-10-09 10:33:17.081064 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.081075 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.081086 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.081097 | orchestrator | 2025-10-09 10:33:17.081108 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-10-09 10:33:17.081119 | orchestrator | Thursday 09 October 2025 10:31:24 +0000 (0:00:00.754) 0:00:19.686 ****** 2025-10-09 10:33:17.081130 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.081140 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.081158 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.081168 | orchestrator | 2025-10-09 10:33:17.081179 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-10-09 10:33:17.081190 | orchestrator | Thursday 09 October 2025 10:31:24 +0000 (0:00:00.586) 0:00:20.272 ****** 2025-10-09 10:33:17.081201 | 
orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.081282 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.081295 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.081308 | orchestrator | 2025-10-09 10:33:17.081328 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-10-09 10:33:17.081341 | orchestrator | Thursday 09 October 2025 10:31:25 +0000 (0:00:00.720) 0:00:20.993 ****** 2025-10-09 10:33:17.081355 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.081368 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.081381 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.081394 | orchestrator | 2025-10-09 10:33:17.081408 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-10-09 10:33:17.081421 | orchestrator | Thursday 09 October 2025 10:31:25 +0000 (0:00:00.307) 0:00:21.300 ****** 2025-10-09 10:33:17.081433 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.081446 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.081458 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.081471 | orchestrator | 2025-10-09 10:33:17.081484 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-10-09 10:33:17.081498 | orchestrator | Thursday 09 October 2025 10:31:26 +0000 (0:00:00.419) 0:00:21.720 ****** 2025-10-09 10:33:17.081510 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.081524 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.081537 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.081549 | orchestrator | 2025-10-09 10:33:17.081562 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-10-09 10:33:17.081575 | orchestrator | Thursday 09 October 2025 10:31:26 +0000 (0:00:00.573) 0:00:22.294 ****** 2025-10-09 10:33:17.081587 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-10-09 10:33:17.081599 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-10-09 10:33:17.081610 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-10-09 10:33:17.081622 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-10-09 10:33:17.081634 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-10-09 10:33:17.081644 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-10-09 10:33:17.081654 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-10-09 10:33:17.081664 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-10-09 10:33:17.081674 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-10-09 10:33:17.081684 | orchestrator | 2025-10-09 10:33:17.081694 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-10-09 10:33:17.081704 | orchestrator | Thursday 09 October 2025 10:31:27 +0000 (0:00:01.014) 0:00:23.309 ****** 2025-10-09 10:33:17.081714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-10-09 10:33:17.081725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-10-09 10:33:17.081735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-10-09 10:33:17.081745 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.081755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-10-09 10:33:17.081765 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-10-09 10:33:17.081775 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-10-09 10:33:17.081785 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.081795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-10-09 10:33:17.081806 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-10-09 10:33:17.081815 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-10-09 10:33:17.081833 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.081843 | orchestrator | 2025-10-09 10:33:17.081853 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-10-09 10:33:17.081863 | orchestrator | Thursday 09 October 2025 10:31:28 +0000 (0:00:00.375) 0:00:23.684 ****** 2025-10-09 10:33:17.081874 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:33:17.081884 | orchestrator | 2025-10-09 10:33:17.081894 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-10-09 10:33:17.081906 | orchestrator | Thursday 09 October 2025 10:31:28 +0000 (0:00:00.631) 0:00:24.315 ****** 2025-10-09 10:33:17.081916 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.081926 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.081936 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.081947 | orchestrator | 2025-10-09 10:33:17.081963 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-10-09 10:33:17.081973 | orchestrator | Thursday 09 October 2025 10:31:29 +0000 (0:00:00.292) 0:00:24.608 ****** 2025-10-09 10:33:17.081983 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.081993 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.082003 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.082013 | orchestrator | 2025-10-09 10:33:17.082070 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-10-09 10:33:17.082082 | orchestrator | Thursday 09 October 2025 10:31:29 +0000 (0:00:00.286) 0:00:24.894 ****** 2025-10-09 10:33:17.082092 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.082102 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.082113 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:33:17.082123 | orchestrator | 2025-10-09 10:33:17.082133 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-10-09 10:33:17.082143 | orchestrator | Thursday 09 October 2025 10:31:29 +0000 (0:00:00.315) 0:00:25.210 ****** 2025-10-09 10:33:17.082153 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.082163 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.082173 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.082183 | orchestrator | 2025-10-09 10:33:17.082193 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-10-09 10:33:17.082219 | orchestrator | Thursday 09 October 2025 10:31:30 +0000 (0:00:00.533) 0:00:25.743 ****** 2025-10-09 10:33:17.082230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:33:17.082240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 
10:33:17.082250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:33:17.082266 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.082276 | orchestrator | 2025-10-09 10:33:17.082287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-10-09 10:33:17.082297 | orchestrator | Thursday 09 October 2025 10:31:30 +0000 (0:00:00.373) 0:00:26.117 ****** 2025-10-09 10:33:17.082307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:33:17.082317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:33:17.082327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:33:17.082337 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.082347 | orchestrator | 2025-10-09 10:33:17.082357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-10-09 10:33:17.082367 | orchestrator | Thursday 09 October 2025 10:31:31 +0000 (0:00:00.342) 0:00:26.459 ****** 2025-10-09 10:33:17.082377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:33:17.082387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:33:17.082397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:33:17.082407 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.082430 | orchestrator | 2025-10-09 10:33:17.082440 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-10-09 10:33:17.082450 | orchestrator | Thursday 09 October 2025 10:31:31 +0000 (0:00:00.353) 0:00:26.812 ****** 2025-10-09 10:33:17.082460 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:33:17.082471 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:33:17.082481 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:33:17.082491 | orchestrator | 2025-10-09 10:33:17.082501 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-10-09 10:33:17.082512 | orchestrator | Thursday 09 October 2025 10:31:31 +0000 (0:00:00.337) 0:00:27.149 ****** 2025-10-09 10:33:17.082522 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-10-09 10:33:17.082532 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-10-09 10:33:17.082542 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-10-09 10:33:17.082552 | orchestrator | 2025-10-09 10:33:17.082562 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-10-09 10:33:17.082572 | orchestrator | Thursday 09 October 2025 10:31:32 +0000 (0:00:00.517) 0:00:27.667 ****** 2025-10-09 10:33:17.082583 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:33:17.082593 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:33:17.082603 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:33:17.082613 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-10-09 10:33:17.082623 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-10-09 10:33:17.082633 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-10-09 10:33:17.082643 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-10-09 10:33:17.082654 | orchestrator | 2025-10-09 10:33:17.082664 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-10-09 10:33:17.082674 | orchestrator | Thursday 09 October 2025 10:31:33 +0000 (0:00:01.063) 0:00:28.730 ****** 2025-10-09 10:33:17.082684 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:33:17.082694 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:33:17.082704 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:33:17.082714 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-10-09 10:33:17.082724 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-10-09 10:33:17.082734 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-10-09 10:33:17.082744 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-10-09 10:33:17.082755 | orchestrator | 2025-10-09 10:33:17.082771 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-10-09 10:33:17.082782 | orchestrator | Thursday 09 October 2025 10:31:35 +0000 (0:00:02.128) 0:00:30.859 ****** 2025-10-09 10:33:17.082792 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:33:17.082802 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:33:17.082812 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-10-09 10:33:17.082823 | orchestrator | 2025-10-09 10:33:17.082833 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-10-09 10:33:17.082843 | orchestrator | Thursday 09 October 2025 10:31:35 +0000 (0:00:00.414) 0:00:31.273 ****** 2025-10-09 10:33:17.082854 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:33:17.082871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:33:17.082886 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:33:17.082896 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:33:17.082907 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:33:17.082917 | orchestrator | 2025-10-09 10:33:17.082927 | orchestrator | TASK [generate keys] *********************************************************** 2025-10-09 10:33:17.082937 | orchestrator | Thursday 09 October 2025 10:32:22 +0000 (0:00:46.769) 0:01:18.042 ****** 2025-10-09 10:33:17.082947 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.082957 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.082966 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.082976 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.082986 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.082996 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083005 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-10-09 10:33:17.083015 | orchestrator | 2025-10-09 10:33:17.083025 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-10-09 10:33:17.083034 | orchestrator | Thursday 09 October 2025 10:32:46 +0000 (0:00:24.038) 0:01:42.081 ****** 2025-10-09 10:33:17.083044 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083054 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083063 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083073 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083083 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083093 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083102 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:33:17.083112 | orchestrator | 2025-10-09 10:33:17.083122 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-10-09 10:33:17.083131 | orchestrator | Thursday 09 October 2025 10:32:58 +0000 (0:00:12.079) 0:01:54.160 ****** 2025-10-09 10:33:17.083141 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083151 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:33:17.083160 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:33:17.083170 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083180 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:33:17.083196 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:33:17.083225 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083236 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:33:17.083247 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:33:17.083257 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083267 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:33:17.083277 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:33:17.083287 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083297 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:33:17.083307 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:33:17.083317 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:33:17.083327 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:33:17.083337 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:33:17.083347 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-10-09 10:33:17.083357 | orchestrator | 2025-10-09 10:33:17.083368 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:33:17.083382 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-10-09 10:33:17.083394 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-10-09 10:33:17.083404 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-10-09 10:33:17.083415 | orchestrator | 2025-10-09 10:33:17.083425 | orchestrator | 2025-10-09 10:33:17.083435 | orchestrator | 2025-10-09 10:33:17.083445 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:33:17.083455 | orchestrator | Thursday 09 October 2025 10:33:16 +0000 (0:00:17.684) 0:02:11.845 ****** 2025-10-09 10:33:17.083465 | orchestrator | =============================================================================== 2025-10-09 10:33:17.083475 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.77s 2025-10-09 10:33:17.083485 | orchestrator | generate keys ---------------------------------------------------------- 24.04s 2025-10-09 10:33:17.083496 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.68s 2025-10-09 10:33:17.083506 | orchestrator | get keys from monitors ------------------------------------------------- 12.08s 2025-10-09 10:33:17.083516 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.24s 2025-10-09 10:33:17.083526 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.13s 2025-10-09 10:33:17.083536 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.72s 2025-10-09 10:33:17.083546 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.06s 2025-10-09 10:33:17.083556 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.01s 2025-10-09 10:33:17.083566 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.89s 2025-10-09 10:33:17.083576 | orchestrator | 
ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.84s 2025-10-09 10:33:17.083586 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.78s 2025-10-09 10:33:17.083597 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.78s 2025-10-09 10:33:17.083613 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.76s 2025-10-09 10:33:17.083623 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s 2025-10-09 10:33:17.083634 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.74s 2025-10-09 10:33:17.083644 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.72s 2025-10-09 10:33:17.083653 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.71s 2025-10-09 10:33:17.083663 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.63s 2025-10-09 10:33:17.083674 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.59s 2025-10-09 10:33:17.083684 | orchestrator | 2025-10-09 10:33:17 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:17.083694 | orchestrator | 2025-10-09 10:33:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:20.123452 | orchestrator | 2025-10-09 10:33:20 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:20.123936 | orchestrator | 2025-10-09 10:33:20 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:20.125676 | orchestrator | 2025-10-09 10:33:20 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:20.126002 | orchestrator | 2025-10-09 10:33:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:23.180548 | orchestrator | 2025-10-09 10:33:23 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:23.181898 | orchestrator | 2025-10-09 10:33:23 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:23.184166 | orchestrator | 2025-10-09 10:33:23 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:23.184189 | orchestrator | 2025-10-09 10:33:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:26.232121 | orchestrator | 2025-10-09 10:33:26 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:26.234457 | orchestrator | 2025-10-09 10:33:26 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:26.236330 | orchestrator | 2025-10-09 10:33:26 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:26.236363 | orchestrator | 2025-10-09 10:33:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:29.278531 | orchestrator | 2025-10-09 10:33:29 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:29.281483 | orchestrator | 2025-10-09 10:33:29 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:29.282807 | orchestrator | 2025-10-09 10:33:29 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:29.283001 | orchestrator | 2025-10-09 10:33:29 | INFO  | Wait 1 second(s) until the next check 2025-10-09 
10:33:32.328110 | orchestrator | 2025-10-09 10:33:32 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:32.331479 | orchestrator | 2025-10-09 10:33:32 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:32.332471 | orchestrator | 2025-10-09 10:33:32 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:32.332502 | orchestrator | 2025-10-09 10:33:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:35.387502 | orchestrator | 2025-10-09 10:33:35 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:35.391272 | orchestrator | 2025-10-09 10:33:35 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:35.395873 | orchestrator | 2025-10-09 10:33:35 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:35.395899 | orchestrator | 2025-10-09 10:33:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:38.443557 | orchestrator | 2025-10-09 10:33:38 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:38.445198 | orchestrator | 2025-10-09 10:33:38 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:38.446667 | orchestrator | 2025-10-09 10:33:38 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:38.446809 | orchestrator | 2025-10-09 10:33:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:41.490157 | orchestrator | 2025-10-09 10:33:41 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:41.491160 | orchestrator | 2025-10-09 10:33:41 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:41.492992 | orchestrator | 2025-10-09 10:33:41 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:41.493048 | orchestrator | 2025-10-09 10:33:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:44.553553 | orchestrator | 2025-10-09 10:33:44 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:44.557396 | orchestrator | 2025-10-09 10:33:44 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:44.561351 | orchestrator | 2025-10-09 10:33:44 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:44.561555 | orchestrator | 2025-10-09 10:33:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:47.619016 | orchestrator | 2025-10-09 10:33:47 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:47.620945 | orchestrator | 2025-10-09 10:33:47 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:47.622910 | orchestrator | 2025-10-09 10:33:47 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:47.622948 | orchestrator | 2025-10-09 10:33:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:33:50.677367 | orchestrator | 2025-10-09 10:33:50 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED 2025-10-09 10:33:50.679106 | orchestrator | 2025-10-09 10:33:50 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:33:50.681387 | orchestrator | 2025-10-09 10:33:50 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED 2025-10-09 10:33:50.681412 | 
orchestrator | 2025-10-09 10:33:50 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:33:53.729317 | orchestrator | 2025-10-09 10:33:53 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED
2025-10-09 10:33:53.730902 | orchestrator | 2025-10-09 10:33:53 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:33:53.732943 | orchestrator | 2025-10-09 10:33:53 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:33:53.735910 | orchestrator | 2025-10-09 10:33:53 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:33:56.774800 | orchestrator | 2025-10-09 10:33:56 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state STARTED
2025-10-09 10:33:56.775287 | orchestrator | 2025-10-09 10:33:56 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:33:56.776997 | orchestrator | 2025-10-09 10:33:56 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:33:56.777832 | orchestrator | 2025-10-09 10:33:56 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:33:59.829424 | orchestrator | 2025-10-09 10:33:59 | INFO  | Task e18a7c97-131c-4cdb-a469-08e727aee338 is in state SUCCESS
2025-10-09 10:33:59.831744 | orchestrator | 2025-10-09 10:33:59 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:33:59.834343 | orchestrator | 2025-10-09 10:33:59 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:33:59.836915 | orchestrator | 2025-10-09 10:33:59 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:33:59.836966 | orchestrator | 2025-10-09 10:33:59 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:02.896892 | orchestrator | 2025-10-09 10:34:02 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:34:02.898527 | orchestrator | 2025-10-09 10:34:02 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:34:02.900897 | orchestrator | 2025-10-09 10:34:02 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:34:02.900930 | orchestrator | 2025-10-09 10:34:02 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:05.944005 | orchestrator | 2025-10-09 10:34:05 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:34:05.945964 | orchestrator | 2025-10-09 10:34:05 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:34:05.948595 | orchestrator | 2025-10-09 10:34:05 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:34:05.948703 | orchestrator | 2025-10-09 10:34:05 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:08.992553 | orchestrator | 2025-10-09 10:34:08 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:34:08.994455 | orchestrator | 2025-10-09 10:34:08 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:34:08.996784 | orchestrator | 2025-10-09 10:34:08 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:34:08.996808 | orchestrator | 2025-10-09 10:34:08 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:12.044109 | orchestrator | 2025-10-09 10:34:12 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:34:12.045264 | orchestrator | 2025-10-09 10:34:12 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:34:12.046709 | orchestrator | 2025-10-09 10:34:12 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:34:12.046738 | orchestrator | 2025-10-09 10:34:12 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:15.098336 | orchestrator | 2025-10-09 10:34:15 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:34:15.099154 | orchestrator | 2025-10-09 10:34:15 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:34:15.101083 | orchestrator | 2025-10-09 10:34:15 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:34:15.101603 | orchestrator | 2025-10-09 10:34:15 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:18.143869 | orchestrator | 2025-10-09 10:34:18 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:34:18.145340 | orchestrator | 2025-10-09 10:34:18 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:34:18.146543 | orchestrator | 2025-10-09 10:34:18 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:34:18.146569 | orchestrator | 2025-10-09 10:34:18 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:21.195345 | orchestrator | 2025-10-09 10:34:21 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:34:21.195443 | orchestrator | 2025-10-09 10:34:21 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:34:21.196542 | orchestrator | 2025-10-09 10:34:21 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state STARTED
2025-10-09 10:34:21.196564 | orchestrator | 2025-10-09 10:34:21 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:24.242320 | orchestrator | 2025-10-09 10:34:24 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED
2025-10-09 10:34:24.244084 | orchestrator | 2025-10-09 10:34:24 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED
2025-10-09 10:34:24.246716 | orchestrator | 2025-10-09 10:34:24 | INFO  | Task 1902a84e-fb91-451c-8f39-db54bf7af20b is in state SUCCESS
2025-10-09 10:34:24.248393 | orchestrator |
2025-10-09 10:34:24.248429 | orchestrator |
2025-10-09 10:34:24.248442 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-10-09 10:34:24.248454 | orchestrator |
2025-10-09 10:34:24.248466 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2025-10-09 10:34:24.248478 | orchestrator | Thursday 09 October 2025 10:33:21 +0000 (0:00:00.163) 0:00:00.163 ******
2025-10-09 10:34:24.248489 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-10-09 10:34:24.248502 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:34:24.248513 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:34:24.248524 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-10-09 10:34:24.248535 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:34:24.248546 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-10-09 10:34:24.248559 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-10-09 10:34:24.248570 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-10-09 10:34:24.248581 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-10-09 10:34:24.248592 | orchestrator | 2025-10-09 10:34:24.248604 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-10-09 10:34:24.248615 | orchestrator | Thursday 09 October 2025 10:33:25 +0000 (0:00:04.656) 0:00:04.820 ****** 2025-10-09 10:34:24.248626 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-10-09 10:34:24.248637 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.248648 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.248659 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:34:24.248670 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.248681 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-10-09 10:34:24.248716 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-10-09 10:34:24.248728 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-10-09 10:34:24.249018 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-10-09 10:34:24.249031 | orchestrator | 2025-10-09 10:34:24.249042 | orchestrator | TASK [Create share directory] ************************************************** 2025-10-09 10:34:24.249052 | orchestrator | Thursday 09 October 2025 10:33:30 +0000 (0:00:04.312) 0:00:09.133 ****** 2025-10-09 10:34:24.249064 | orchestrator | changed: [testbed-manager -> localhost] 2025-10-09 10:34:24.249075 | orchestrator | 2025-10-09 10:34:24.249086 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-10-09 10:34:24.249096 | orchestrator | Thursday 09 October 2025 10:33:31 +0000 (0:00:01.503) 0:00:10.636 ****** 2025-10-09 10:34:24.249107 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-10-09 10:34:24.249118 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.249128 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.249139 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:34:24.249149 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.249160 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-10-09 10:34:24.249170 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-10-09 10:34:24.249181 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-10-09 10:34:24.249192 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-10-09 10:34:24.249246 | orchestrator | 2025-10-09 10:34:24.249258 | orchestrator | TASK [Check if target directories exist] *************************************** 2025-10-09 10:34:24.249269 | orchestrator | Thursday 09 October 2025 10:33:46 +0000 (0:00:14.875) 0:00:25.511 ****** 2025-10-09 10:34:24.249279 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2025-10-09 10:34:24.249290 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2025-10-09 10:34:24.249311 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-10-09 10:34:24.249323 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-10-09 10:34:24.249344 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-10-09 10:34:24.249356 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-10-09 10:34:24.249366 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2025-10-09 10:34:24.249377 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2025-10-09 10:34:24.249387 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2025-10-09 10:34:24.249398 | orchestrator | 2025-10-09 10:34:24.249409 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-10-09 10:34:24.249419 | orchestrator | Thursday 09 October 2025 10:33:49 +0000 (0:00:03.200) 0:00:28.712 ****** 2025-10-09 10:34:24.249431 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-10-09 10:34:24.249442 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.249453 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.249463 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:34:24.249485 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-10-09 10:34:24.249496 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-10-09 10:34:24.249507 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-10-09 10:34:24.249518 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-10-09 10:34:24.249528 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-10-09 10:34:24.249539 | orchestrator | 2025-10-09 10:34:24.249550 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:34:24.249561 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:34:24.249573 | orchestrator | 2025-10-09 10:34:24.249584 | orchestrator | 2025-10-09 10:34:24.249595 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:34:24.249606 | 
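
The play that just finished collects the Ceph client keyrings from testbed-node-0 and copies each keyring into the configuration or kolla overlay directory of the service that consumes it (cinder-volume, cinder-backup, nova, glance, gnocchi, manila). Below is a rough Python sketch of that fan-out step, assuming the keyrings were already fetched to a local staging directory; the keyring-to-directory pairing is inferred from the item order in the task output and the path names above, and all other names are hypothetical rather than taken from the testbed playbooks.

# Sketch: copy fetched Ceph client keyrings into per-service target directories.
# SHARE_DIR is a hypothetical staging directory; the pairing below is inferred
# from the item order in the log and may not match the playbook exactly.
import shutil
from pathlib import Path

SHARE_DIR = Path("/tmp/ceph-keys")
CONFIG = Path("/opt/configuration/environments")
OVERLAYS = CONFIG / "kolla/files/overlays"

KEYRING_TARGETS = [
    ("ceph.client.admin.keyring",         CONFIG / "infrastructure/files/ceph"),
    ("ceph.client.cinder.keyring",        OVERLAYS / "cinder/cinder-volume"),
    ("ceph.client.cinder.keyring",        OVERLAYS / "cinder/cinder-backup"),
    ("ceph.client.cinder-backup.keyring", OVERLAYS / "cinder/cinder-backup"),
    ("ceph.client.cinder.keyring",        OVERLAYS / "nova"),
    ("ceph.client.nova.keyring",          OVERLAYS / "nova"),
    ("ceph.client.glance.keyring",        OVERLAYS / "glance"),
    ("ceph.client.gnocchi.keyring",       OVERLAYS / "gnocchi"),
    ("ceph.client.manila.keyring",        OVERLAYS / "manila"),
]

def distribute_keyrings() -> None:
    for keyring, target_dir in KEYRING_TARGETS:
        source = SHARE_DIR / keyring
        if not source.is_file():              # mirrors the "Check if ceph keys exist" step
            print(f"missing {keyring}, skipping")
            continue
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, target_dir / keyring)
        print(f"copied {keyring} -> {target_dir}")

if __name__ == "__main__":
    distribute_keyrings()

In the actual play this is done with Ansible fetch/copy tasks looping over the keyring list; the sketch only illustrates the mapping that the task items imply.
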
orchestrator | Thursday 09 October 2025 10:33:57 +0000 (0:00:07.149) 0:00:35.862 ****** 2025-10-09 10:34:24.249617 | orchestrator | =============================================================================== 2025-10-09 10:34:24.249630 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.88s 2025-10-09 10:34:24.249642 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.15s 2025-10-09 10:34:24.249654 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.66s 2025-10-09 10:34:24.249666 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.31s 2025-10-09 10:34:24.249677 | orchestrator | Check if target directories exist --------------------------------------- 3.20s 2025-10-09 10:34:24.249688 | orchestrator | Create share directory -------------------------------------------------- 1.50s 2025-10-09 10:34:24.249699 | orchestrator | 2025-10-09 10:34:24.249709 | orchestrator | 2025-10-09 10:34:24.249720 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:34:24.249731 | orchestrator | 2025-10-09 10:34:24.249741 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:34:24.249752 | orchestrator | Thursday 09 October 2025 10:32:34 +0000 (0:00:00.279) 0:00:00.279 ****** 2025-10-09 10:34:24.249763 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.249773 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.249784 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.249795 | orchestrator | 2025-10-09 10:34:24.249806 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:34:24.249816 | orchestrator | Thursday 09 October 2025 10:32:35 +0000 (0:00:00.320) 0:00:00.599 ****** 2025-10-09 10:34:24.249827 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-10-09 10:34:24.249838 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-10-09 10:34:24.249849 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-10-09 10:34:24.249859 | orchestrator | 2025-10-09 10:34:24.249870 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-10-09 10:34:24.249881 | orchestrator | 2025-10-09 10:34:24.249892 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-09 10:34:24.249903 | orchestrator | Thursday 09 October 2025 10:32:35 +0000 (0:00:00.496) 0:00:01.096 ****** 2025-10-09 10:34:24.249913 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:34:24.249924 | orchestrator | 2025-10-09 10:34:24.249935 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-10-09 10:34:24.249945 | orchestrator | Thursday 09 October 2025 10:32:36 +0000 (0:00:00.523) 0:00:01.620 ****** 2025-10-09 10:34:24.249981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.250007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.250095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.250110 | orchestrator | 2025-10-09 10:34:24.250121 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-10-09 10:34:24.250132 | orchestrator | Thursday 09 October 2025 10:32:37 +0000 (0:00:01.317) 0:00:02.938 ****** 2025-10-09 10:34:24.250143 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.250154 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.250165 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.250176 | orchestrator | 2025-10-09 10:34:24.250187 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-09 10:34:24.250217 | orchestrator | Thursday 09 October 2025 10:32:37 +0000 (0:00:00.550) 0:00:03.489 ****** 2025-10-09 10:34:24.250229 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-10-09 10:34:24.250240 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-10-09 10:34:24.250250 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-10-09 10:34:24.250261 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-10-09 10:34:24.250272 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-10-09 10:34:24.250282 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-10-09 10:34:24.250293 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-10-09 10:34:24.250304 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-10-09 10:34:24.250315 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-10-09 10:34:24.250325 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-10-09 10:34:24.250343 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-10-09 10:34:24.250354 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-10-09 10:34:24.250365 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-10-09 10:34:24.250375 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-10-09 10:34:24.250386 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-10-09 10:34:24.250397 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-10-09 10:34:24.250408 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-10-09 10:34:24.250418 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-10-09 10:34:24.250434 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-10-09 10:34:24.250445 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-10-09 10:34:24.250456 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-10-09 10:34:24.250472 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-10-09 10:34:24.250483 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-10-09 10:34:24.250494 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-10-09 10:34:24.250506 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-10-09 10:34:24.250519 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-10-09 10:34:24.250530 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-10-09 10:34:24.250540 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-10-09 10:34:24.250551 | orchestrator | 
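
In the include_tasks loop running here, each dashboard service carries an enabled flag in mixed forms ('yes'/'no' strings for some services, Python booleans for others), and both spellings are treated alike when deciding whether to include policy_item.yml or skip the service. A minimal, purely illustrative normalization is sketched below; the is_enabled() helper is hypothetical (Ansible performs the equivalent truthiness coercion internally).

# Sketch: normalize the mixed 'enabled' values seen in the loop items
# ('yes'/'no' strings and Python booleans) before deciding include vs. skip.
TRUTHY = {"yes", "true", "on", "1"}

def is_enabled(value) -> bool:
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in TRUTHY

items = [
    {"name": "cloudkitty", "enabled": False},   # values taken from the log above
    {"name": "heat", "enabled": "no"},
    {"name": "ceilometer", "enabled": "yes"},
    {"name": "designate", "enabled": True},
]

for item in items:
    action = "included" if is_enabled(item["enabled"]) else "skipping"
    print(f"{action}: {item['name']}")
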
included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-10-09 10:34:24.250562 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-10-09 10:34:24.250573 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-10-09 10:34:24.250584 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-10-09 10:34:24.250594 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-10-09 10:34:24.250605 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-10-09 10:34:24.250616 | orchestrator | 2025-10-09 10:34:24.250627 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.250638 | orchestrator | Thursday 09 October 2025 10:32:38 +0000 (0:00:00.921) 0:00:04.410 ****** 2025-10-09 10:34:24.250648 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.250659 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.250670 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.250681 | orchestrator | 2025-10-09 10:34:24.250691 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.250709 | orchestrator | Thursday 09 October 2025 10:32:39 +0000 (0:00:00.315) 0:00:04.726 ****** 2025-10-09 10:34:24.250720 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.250731 | orchestrator | 2025-10-09 10:34:24.250742 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.250752 | orchestrator | Thursday 09 October 2025 10:32:39 +0000 (0:00:00.171) 0:00:04.897 ****** 2025-10-09 10:34:24.250763 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.250774 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.250784 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.250795 | orchestrator | 2025-10-09 10:34:24.250806 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.250817 | orchestrator | Thursday 09 October 2025 10:32:39 +0000 (0:00:00.491) 0:00:05.388 ****** 2025-10-09 10:34:24.250827 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.250838 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.250849 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.250860 | orchestrator | 2025-10-09 10:34:24.250871 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.250882 | orchestrator | Thursday 09 October 2025 10:32:40 +0000 (0:00:00.336) 0:00:05.725 ****** 2025-10-09 10:34:24.250892 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.250903 | orchestrator | 2025-10-09 10:34:24.250914 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.250924 | orchestrator | Thursday 09 October 2025 
10:32:40 +0000 (0:00:00.152) 0:00:05.877 ****** 2025-10-09 10:34:24.250935 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.250946 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.250957 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.250968 | orchestrator | 2025-10-09 10:34:24.250979 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.250990 | orchestrator | Thursday 09 October 2025 10:32:40 +0000 (0:00:00.381) 0:00:06.258 ****** 2025-10-09 10:34:24.251000 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.251011 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.251022 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.251033 | orchestrator | 2025-10-09 10:34:24.251044 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.251055 | orchestrator | Thursday 09 October 2025 10:32:41 +0000 (0:00:00.489) 0:00:06.748 ****** 2025-10-09 10:34:24.251065 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251076 | orchestrator | 2025-10-09 10:34:24.251092 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.251103 | orchestrator | Thursday 09 October 2025 10:32:41 +0000 (0:00:00.139) 0:00:06.888 ****** 2025-10-09 10:34:24.251114 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251124 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.251135 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.251146 | orchestrator | 2025-10-09 10:34:24.251162 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.251173 | orchestrator | Thursday 09 October 2025 10:32:41 +0000 (0:00:00.540) 0:00:07.428 ****** 2025-10-09 10:34:24.251184 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.251195 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.251257 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.251269 | orchestrator | 2025-10-09 10:34:24.251280 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.251290 | orchestrator | Thursday 09 October 2025 10:32:42 +0000 (0:00:00.343) 0:00:07.772 ****** 2025-10-09 10:34:24.251301 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251312 | orchestrator | 2025-10-09 10:34:24.251323 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.251334 | orchestrator | Thursday 09 October 2025 10:32:42 +0000 (0:00:00.138) 0:00:07.911 ****** 2025-10-09 10:34:24.251352 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251362 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.251373 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.251384 | orchestrator | 2025-10-09 10:34:24.251395 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.251405 | orchestrator | Thursday 09 October 2025 10:32:42 +0000 (0:00:00.290) 0:00:08.202 ****** 2025-10-09 10:34:24.251416 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.251427 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.251438 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.251448 | orchestrator | 2025-10-09 10:34:24.251459 | orchestrator | TASK [horizon : Check if policies shall be 
overwritten] ************************ 2025-10-09 10:34:24.251470 | orchestrator | Thursday 09 October 2025 10:32:43 +0000 (0:00:00.559) 0:00:08.762 ****** 2025-10-09 10:34:24.251480 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251491 | orchestrator | 2025-10-09 10:34:24.251502 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.251513 | orchestrator | Thursday 09 October 2025 10:32:43 +0000 (0:00:00.146) 0:00:08.908 ****** 2025-10-09 10:34:24.251523 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251534 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.251545 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.251556 | orchestrator | 2025-10-09 10:34:24.251567 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.251577 | orchestrator | Thursday 09 October 2025 10:32:43 +0000 (0:00:00.308) 0:00:09.216 ****** 2025-10-09 10:34:24.251588 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.251599 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.251610 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.251621 | orchestrator | 2025-10-09 10:34:24.251632 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.251642 | orchestrator | Thursday 09 October 2025 10:32:43 +0000 (0:00:00.332) 0:00:09.549 ****** 2025-10-09 10:34:24.251652 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251662 | orchestrator | 2025-10-09 10:34:24.251671 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.251681 | orchestrator | Thursday 09 October 2025 10:32:44 +0000 (0:00:00.136) 0:00:09.685 ****** 2025-10-09 10:34:24.251690 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251700 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.251709 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.251719 | orchestrator | 2025-10-09 10:34:24.251728 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.251738 | orchestrator | Thursday 09 October 2025 10:32:44 +0000 (0:00:00.343) 0:00:10.029 ****** 2025-10-09 10:34:24.251747 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.251757 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.251766 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.251776 | orchestrator | 2025-10-09 10:34:24.251785 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.251795 | orchestrator | Thursday 09 October 2025 10:32:45 +0000 (0:00:00.549) 0:00:10.579 ****** 2025-10-09 10:34:24.251804 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251814 | orchestrator | 2025-10-09 10:34:24.251823 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.251833 | orchestrator | Thursday 09 October 2025 10:32:45 +0000 (0:00:00.133) 0:00:10.712 ****** 2025-10-09 10:34:24.251842 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251851 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.251861 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.251870 | orchestrator | 2025-10-09 10:34:24.251880 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2025-10-09 10:34:24.251890 | orchestrator | Thursday 09 October 2025 10:32:45 +0000 (0:00:00.333) 0:00:11.046 ****** 2025-10-09 10:34:24.251899 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.251914 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.251924 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.251933 | orchestrator | 2025-10-09 10:34:24.251943 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.251952 | orchestrator | Thursday 09 October 2025 10:32:45 +0000 (0:00:00.351) 0:00:11.397 ****** 2025-10-09 10:34:24.251962 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.251971 | orchestrator | 2025-10-09 10:34:24.251981 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.251990 | orchestrator | Thursday 09 October 2025 10:32:45 +0000 (0:00:00.135) 0:00:11.533 ****** 2025-10-09 10:34:24.252000 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.252009 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.252019 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.252028 | orchestrator | 2025-10-09 10:34:24.252038 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.252047 | orchestrator | Thursday 09 October 2025 10:32:46 +0000 (0:00:00.290) 0:00:11.823 ****** 2025-10-09 10:34:24.252057 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.252077 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.252087 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.252096 | orchestrator | 2025-10-09 10:34:24.252106 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.252116 | orchestrator | Thursday 09 October 2025 10:32:46 +0000 (0:00:00.588) 0:00:12.412 ****** 2025-10-09 10:34:24.252131 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.252140 | orchestrator | 2025-10-09 10:34:24.252150 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-10-09 10:34:24.252160 | orchestrator | Thursday 09 October 2025 10:32:46 +0000 (0:00:00.137) 0:00:12.549 ****** 2025-10-09 10:34:24.252169 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.252179 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.252188 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.252257 | orchestrator | 2025-10-09 10:34:24.252268 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-10-09 10:34:24.252278 | orchestrator | Thursday 09 October 2025 10:32:47 +0000 (0:00:00.304) 0:00:12.853 ****** 2025-10-09 10:34:24.252287 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:24.252297 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:24.252306 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:24.252316 | orchestrator | 2025-10-09 10:34:24.252325 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-10-09 10:34:24.252334 | orchestrator | Thursday 09 October 2025 10:32:47 +0000 (0:00:00.322) 0:00:13.176 ****** 2025-10-09 10:34:24.252344 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.252353 | orchestrator | 2025-10-09 10:34:24.252363 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 
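
The trio of tasks repeating above ("Update policy file name", "Check if policies shall be overwritten", "Update custom policy file name") runs once per enabled service (ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova, octavia): it resolves the default policy file name and only switches to a custom one when the operator has supplied an override, which is why the custom-policy tasks are skipped throughout this run. The sketch below illustrates that lookup pattern under assumed conventions; the directory and file names are not kolla-ansible's actual variables.

# Sketch of the per-service policy-file lookup pattern repeated above.
# The override directory and file-name conventions are assumptions.
from pathlib import Path
from typing import Optional

CUSTOM_POLICY_DIR = Path("/etc/kolla/config/horizon")  # hypothetical operator override dir

def find_custom_policy(service: str) -> Optional[Path]:
    """Return an operator-supplied policy file for a service, if any."""
    for name in (f"{service}_policy.yaml", f"{service}_policy.json"):
        candidate = CUSTOM_POLICY_DIR / name
        if candidate.is_file():
            return candidate        # "Update custom policy file name" would then run
    return None                     # otherwise the task is skipped, as in this log

for service in ("ceilometer", "cinder", "designate", "glance", "keystone",
                "magnum", "manila", "neutron", "nova", "octavia"):
    policy = find_custom_policy(service)
    print(f"{service}: {'custom policy ' + str(policy) if policy else 'no custom policy, skipping'}")
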
2025-10-09 10:34:24.252372 | orchestrator | Thursday 09 October 2025 10:32:47 +0000 (0:00:00.135) 0:00:13.312 ****** 2025-10-09 10:34:24.252382 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.252391 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.252401 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.252410 | orchestrator | 2025-10-09 10:34:24.252420 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-10-09 10:34:24.252429 | orchestrator | Thursday 09 October 2025 10:32:48 +0000 (0:00:00.527) 0:00:13.839 ****** 2025-10-09 10:34:24.252438 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:34:24.252448 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:24.252457 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:34:24.252467 | orchestrator | 2025-10-09 10:34:24.252476 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-10-09 10:34:24.252486 | orchestrator | Thursday 09 October 2025 10:32:50 +0000 (0:00:01.935) 0:00:15.775 ****** 2025-10-09 10:34:24.252495 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-10-09 10:34:24.252511 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-10-09 10:34:24.252521 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-10-09 10:34:24.252530 | orchestrator | 2025-10-09 10:34:24.252540 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-10-09 10:34:24.252549 | orchestrator | Thursday 09 October 2025 10:32:52 +0000 (0:00:02.219) 0:00:17.994 ****** 2025-10-09 10:34:24.252559 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-10-09 10:34:24.252568 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-10-09 10:34:24.252578 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-10-09 10:34:24.252587 | orchestrator | 2025-10-09 10:34:24.252597 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-10-09 10:34:24.252606 | orchestrator | Thursday 09 October 2025 10:32:54 +0000 (0:00:02.566) 0:00:20.561 ****** 2025-10-09 10:34:24.252615 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-10-09 10:34:24.252625 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-10-09 10:34:24.252634 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-10-09 10:34:24.252644 | orchestrator | 2025-10-09 10:34:24.252653 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-10-09 10:34:24.252663 | orchestrator | Thursday 09 October 2025 10:32:57 +0000 (0:00:02.388) 0:00:22.950 ****** 2025-10-09 10:34:24.252672 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.252681 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.252691 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.252700 | orchestrator | 2025-10-09 10:34:24.252709 | orchestrator | TASK [horizon : Copying over custom themes] 
************************************ 2025-10-09 10:34:24.252719 | orchestrator | Thursday 09 October 2025 10:32:57 +0000 (0:00:00.356) 0:00:23.306 ****** 2025-10-09 10:34:24.252728 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.252738 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.252747 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.252757 | orchestrator | 2025-10-09 10:34:24.252766 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-09 10:34:24.252776 | orchestrator | Thursday 09 October 2025 10:32:58 +0000 (0:00:00.301) 0:00:23.608 ****** 2025-10-09 10:34:24.252785 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:34:24.252795 | orchestrator | 2025-10-09 10:34:24.252804 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-10-09 10:34:24.252813 | orchestrator | Thursday 09 October 2025 10:32:58 +0000 (0:00:00.650) 0:00:24.258 ****** 2025-10-09 10:34:24.252837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.252860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.252879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.252900 | orchestrator | 2025-10-09 10:34:24.252910 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-10-09 10:34:24.252919 | orchestrator | Thursday 09 October 2025 10:33:00 +0000 (0:00:01.999) 0:00:26.257 ****** 2025-10-09 10:34:24.252942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:34:24.252954 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.252965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 
'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:34:24.252981 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.253002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:34:24.253020 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.253029 | orchestrator | 2025-10-09 10:34:24.253039 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-10-09 10:34:24.253048 | orchestrator | Thursday 09 October 2025 10:33:01 +0000 (0:00:00.709) 0:00:26.966 ****** 2025-10-09 10:34:24.253059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:34:24.253069 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.253092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:34:24.253109 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.253119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:34:24.253130 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.253139 | orchestrator | 2025-10-09 10:34:24.253149 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-10-09 10:34:24.253158 | orchestrator | Thursday 09 October 2025 10:33:02 +0000 (0:00:01.067) 0:00:28.034 ****** 2025-10-09 10:34:24.253181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.253244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.253272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20250928', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:34:24.253292 | orchestrator | 2025-10-09 10:34:24.253300 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-09 10:34:24.253308 | orchestrator | Thursday 09 October 2025 10:33:04 +0000 (0:00:01.538) 0:00:29.573 ****** 2025-10-09 10:34:24.253316 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:24.253324 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:24.253332 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:24.253340 | orchestrator | 2025-10-09 10:34:24.253347 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-09 10:34:24.253355 | orchestrator | Thursday 09 October 2025 10:33:04 +0000 (0:00:00.371) 0:00:29.944 ****** 2025-10-09 10:34:24.253363 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:34:24.253371 | orchestrator | 2025-10-09 10:34:24.253378 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-10-09 10:34:24.253386 | orchestrator | Thursday 09 October 2025 10:33:04 +0000 (0:00:00.576) 0:00:30.521 ****** 2025-10-09 10:34:24.253394 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:24.253402 | orchestrator | 2025-10-09 10:34:24.253410 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-10-09 10:34:24.253417 | orchestrator | Thursday 09 October 2025 10:33:07 +0000 (0:00:02.617) 0:00:33.138 ****** 2025-10-09 10:34:24.253425 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:24.253433 | orchestrator | 2025-10-09 10:34:24.253441 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-10-09 10:34:24.253449 | orchestrator | Thursday 09 October 2025 10:33:10 +0000 (0:00:02.825) 0:00:35.964 ****** 2025-10-09 10:34:24.253456 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:24.253464 | orchestrator | 2025-10-09 10:34:24.253472 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-09 10:34:24.253480 | orchestrator | Thursday 09 October 2025 10:33:27 +0000 (0:00:16.671) 0:00:52.635 ****** 2025-10-09 10:34:24.253492 | orchestrator | 2025-10-09 10:34:24.253500 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-09 10:34:24.253508 | orchestrator | Thursday 09 October 2025 10:33:27 +0000 (0:00:00.067) 0:00:52.702 ****** 2025-10-09 10:34:24.253516 | orchestrator | 2025-10-09 10:34:24.253523 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-09 10:34:24.253531 | orchestrator | Thursday 09 October 2025 10:33:27 +0000 (0:00:00.070) 0:00:52.773 ****** 2025-10-09 10:34:24.253539 | orchestrator | 2025-10-09 10:34:24.253547 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-10-09 10:34:24.253554 | orchestrator | Thursday 09 
October 2025 10:33:27 +0000 (0:00:00.071) 0:00:52.844 ****** 2025-10-09 10:34:24.253562 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:24.253570 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:34:24.253578 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:34:24.253585 | orchestrator | 2025-10-09 10:34:24.253593 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:34:24.253604 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-10-09 10:34:24.253617 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-10-09 10:34:24.253625 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-10-09 10:34:24.253633 | orchestrator | 2025-10-09 10:34:24.253641 | orchestrator | 2025-10-09 10:34:24.253648 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:34:24.253656 | orchestrator | Thursday 09 October 2025 10:34:22 +0000 (0:00:55.676) 0:01:48.521 ****** 2025-10-09 10:34:24.253664 | orchestrator | =============================================================================== 2025-10-09 10:34:24.253672 | orchestrator | horizon : Restart horizon container ------------------------------------ 55.68s 2025-10-09 10:34:24.253679 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.67s 2025-10-09 10:34:24.253687 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.83s 2025-10-09 10:34:24.253695 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.62s 2025-10-09 10:34:24.253703 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.57s 2025-10-09 10:34:24.253710 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.39s 2025-10-09 10:34:24.253718 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.22s 2025-10-09 10:34:24.253726 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.00s 2025-10-09 10:34:24.253734 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.94s 2025-10-09 10:34:24.253742 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.54s 2025-10-09 10:34:24.253749 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.32s 2025-10-09 10:34:24.253757 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.07s 2025-10-09 10:34:24.253765 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.92s 2025-10-09 10:34:24.253772 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2025-10-09 10:34:24.253780 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2025-10-09 10:34:24.253788 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2025-10-09 10:34:24.253795 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2025-10-09 10:34:24.253803 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2025-10-09 
10:34:24.253811 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.55s 2025-10-09 10:34:24.253824 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2025-10-09 10:34:24.253831 | orchestrator | 2025-10-09 10:34:24 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:27.301143 | orchestrator | 2025-10-09 10:34:27 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:27.309183 | orchestrator | 2025-10-09 10:34:27 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:27.309254 | orchestrator | 2025-10-09 10:34:27 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:30.356582 | orchestrator | 2025-10-09 10:34:30 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:30.358931 | orchestrator | 2025-10-09 10:34:30 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:30.358976 | orchestrator | 2025-10-09 10:34:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:33.407959 | orchestrator | 2025-10-09 10:34:33 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:33.408535 | orchestrator | 2025-10-09 10:34:33 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:33.408564 | orchestrator | 2025-10-09 10:34:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:36.457589 | orchestrator | 2025-10-09 10:34:36 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:36.461977 | orchestrator | 2025-10-09 10:34:36 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:36.462062 | orchestrator | 2025-10-09 10:34:36 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:39.513477 | orchestrator | 2025-10-09 10:34:39 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:39.517029 | orchestrator | 2025-10-09 10:34:39 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:39.517080 | orchestrator | 2025-10-09 10:34:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:42.566140 | orchestrator | 2025-10-09 10:34:42 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:42.570157 | orchestrator | 2025-10-09 10:34:42 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:42.570279 | orchestrator | 2025-10-09 10:34:42 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:45.621563 | orchestrator | 2025-10-09 10:34:45 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:45.623250 | orchestrator | 2025-10-09 10:34:45 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:45.623375 | orchestrator | 2025-10-09 10:34:45 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:48.667656 | orchestrator | 2025-10-09 10:34:48 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:48.668478 | orchestrator | 2025-10-09 10:34:48 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:48.668497 | orchestrator | 2025-10-09 10:34:48 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:51.717125 | orchestrator | 2025-10-09 10:34:51 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state 
STARTED 2025-10-09 10:34:51.717649 | orchestrator | 2025-10-09 10:34:51 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:51.717678 | orchestrator | 2025-10-09 10:34:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:54.761517 | orchestrator | 2025-10-09 10:34:54 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:54.763384 | orchestrator | 2025-10-09 10:34:54 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:54.763420 | orchestrator | 2025-10-09 10:34:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:57.806740 | orchestrator | 2025-10-09 10:34:57 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:34:57.807654 | orchestrator | 2025-10-09 10:34:57 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state STARTED 2025-10-09 10:34:57.807686 | orchestrator | 2025-10-09 10:34:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:00.851664 | orchestrator | 2025-10-09 10:35:00 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:35:00.854559 | orchestrator | 2025-10-09 10:35:00 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:00.856048 | orchestrator | 2025-10-09 10:35:00 | INFO  | Task 44b0bac3-91a7-4506-8ea1-eca7c72aa4db is in state STARTED 2025-10-09 10:35:00.859761 | orchestrator | 2025-10-09 10:35:00 | INFO  | Task 3ba03a07-755e-40a1-9c08-08b5236fe341 is in state SUCCESS 2025-10-09 10:35:00.861931 | orchestrator | 2025-10-09 10:35:00 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:00.862409 | orchestrator | 2025-10-09 10:35:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:03.906548 | orchestrator | 2025-10-09 10:35:03 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:35:03.909829 | orchestrator | 2025-10-09 10:35:03 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:03.911468 | orchestrator | 2025-10-09 10:35:03 | INFO  | Task 44b0bac3-91a7-4506-8ea1-eca7c72aa4db is in state STARTED 2025-10-09 10:35:03.913250 | orchestrator | 2025-10-09 10:35:03 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:03.913275 | orchestrator | 2025-10-09 10:35:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:07.010346 | orchestrator | 2025-10-09 10:35:06 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:35:07.010435 | orchestrator | 2025-10-09 10:35:06 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:07.010449 | orchestrator | 2025-10-09 10:35:06 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:07.010461 | orchestrator | 2025-10-09 10:35:06 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:07.010472 | orchestrator | 2025-10-09 10:35:06 | INFO  | Task 44b0bac3-91a7-4506-8ea1-eca7c72aa4db is in state SUCCESS 2025-10-09 10:35:07.010483 | orchestrator | 2025-10-09 10:35:06 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:07.010511 | orchestrator | 2025-10-09 10:35:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:10.003959 | orchestrator | 2025-10-09 10:35:10 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 
10:35:10.004684 | orchestrator | 2025-10-09 10:35:10 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:10.005823 | orchestrator | 2025-10-09 10:35:10 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:10.006782 | orchestrator | 2025-10-09 10:35:10 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:10.007914 | orchestrator | 2025-10-09 10:35:10 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:10.008124 | orchestrator | 2025-10-09 10:35:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:13.040498 | orchestrator | 2025-10-09 10:35:13 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:35:13.040616 | orchestrator | 2025-10-09 10:35:13 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:13.041599 | orchestrator | 2025-10-09 10:35:13 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:13.042366 | orchestrator | 2025-10-09 10:35:13 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:13.043441 | orchestrator | 2025-10-09 10:35:13 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:13.043464 | orchestrator | 2025-10-09 10:35:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:16.080522 | orchestrator | 2025-10-09 10:35:16 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:35:16.081836 | orchestrator | 2025-10-09 10:35:16 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:16.083317 | orchestrator | 2025-10-09 10:35:16 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:16.084921 | orchestrator | 2025-10-09 10:35:16 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:16.086102 | orchestrator | 2025-10-09 10:35:16 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:16.086136 | orchestrator | 2025-10-09 10:35:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:19.125065 | orchestrator | 2025-10-09 10:35:19 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state STARTED 2025-10-09 10:35:19.125584 | orchestrator | 2025-10-09 10:35:19 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:19.127790 | orchestrator | 2025-10-09 10:35:19 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:19.129958 | orchestrator | 2025-10-09 10:35:19 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:19.129982 | orchestrator | 2025-10-09 10:35:19 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:19.129994 | orchestrator | 2025-10-09 10:35:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:22.165744 | orchestrator | 2025-10-09 10:35:22 | INFO  | Task ae0cb940-76d1-4fe2-9343-cdb496d450ad is in state SUCCESS 2025-10-09 10:35:22.167422 | orchestrator | 2025-10-09 10:35:22.167469 | orchestrator | 2025-10-09 10:35:22.167482 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-10-09 10:35:22.167494 | orchestrator | 2025-10-09 10:35:22.167506 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-10-09 
10:35:22.167518 | orchestrator | Thursday 09 October 2025 10:34:01 +0000 (0:00:00.266) 0:00:00.266 ****** 2025-10-09 10:35:22.167529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-10-09 10:35:22.167542 | orchestrator | 2025-10-09 10:35:22.167553 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-10-09 10:35:22.167564 | orchestrator | Thursday 09 October 2025 10:34:01 +0000 (0:00:00.238) 0:00:00.504 ****** 2025-10-09 10:35:22.167576 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-10-09 10:35:22.167587 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-10-09 10:35:22.167599 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-10-09 10:35:22.167636 | orchestrator | 2025-10-09 10:35:22.167648 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-10-09 10:35:22.167659 | orchestrator | Thursday 09 October 2025 10:34:03 +0000 (0:00:01.377) 0:00:01.882 ****** 2025-10-09 10:35:22.167670 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-10-09 10:35:22.167681 | orchestrator | 2025-10-09 10:35:22.167692 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-10-09 10:35:22.167717 | orchestrator | Thursday 09 October 2025 10:34:04 +0000 (0:00:01.417) 0:00:03.300 ****** 2025-10-09 10:35:22.167729 | orchestrator | changed: [testbed-manager] 2025-10-09 10:35:22.167740 | orchestrator | 2025-10-09 10:35:22.167751 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-10-09 10:35:22.167762 | orchestrator | Thursday 09 October 2025 10:34:05 +0000 (0:00:01.098) 0:00:04.398 ****** 2025-10-09 10:35:22.167772 | orchestrator | changed: [testbed-manager] 2025-10-09 10:35:22.167783 | orchestrator | 2025-10-09 10:35:22.167794 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-10-09 10:35:22.167805 | orchestrator | Thursday 09 October 2025 10:34:06 +0000 (0:00:00.965) 0:00:05.363 ****** 2025-10-09 10:35:22.167816 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-10-09 10:35:22.167827 | orchestrator | ok: [testbed-manager] 2025-10-09 10:35:22.167838 | orchestrator | 2025-10-09 10:35:22.167849 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-10-09 10:35:22.167860 | orchestrator | Thursday 09 October 2025 10:34:47 +0000 (0:00:40.526) 0:00:45.890 ****** 2025-10-09 10:35:22.167871 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-10-09 10:35:22.167882 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-10-09 10:35:22.167893 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-10-09 10:35:22.167903 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-10-09 10:35:22.167914 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-10-09 10:35:22.167925 | orchestrator | 2025-10-09 10:35:22.167936 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-10-09 10:35:22.167949 | orchestrator | Thursday 09 October 2025 10:34:51 +0000 (0:00:04.333) 0:00:50.223 ****** 2025-10-09 10:35:22.167961 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-10-09 10:35:22.167973 | orchestrator | 2025-10-09 10:35:22.167986 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-10-09 10:35:22.167998 | orchestrator | Thursday 09 October 2025 10:34:52 +0000 (0:00:00.517) 0:00:50.741 ****** 2025-10-09 10:35:22.168010 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:35:22.168022 | orchestrator | 2025-10-09 10:35:22.168035 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-10-09 10:35:22.168046 | orchestrator | Thursday 09 October 2025 10:34:52 +0000 (0:00:00.148) 0:00:50.889 ****** 2025-10-09 10:35:22.168059 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:35:22.168071 | orchestrator | 2025-10-09 10:35:22.168083 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-10-09 10:35:22.168095 | orchestrator | Thursday 09 October 2025 10:34:52 +0000 (0:00:00.520) 0:00:51.410 ****** 2025-10-09 10:35:22.168107 | orchestrator | changed: [testbed-manager] 2025-10-09 10:35:22.168120 | orchestrator | 2025-10-09 10:35:22.168132 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-10-09 10:35:22.168144 | orchestrator | Thursday 09 October 2025 10:34:54 +0000 (0:00:01.587) 0:00:52.997 ****** 2025-10-09 10:35:22.168157 | orchestrator | changed: [testbed-manager] 2025-10-09 10:35:22.168169 | orchestrator | 2025-10-09 10:35:22.168181 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-10-09 10:35:22.168215 | orchestrator | Thursday 09 October 2025 10:34:55 +0000 (0:00:00.840) 0:00:53.837 ****** 2025-10-09 10:35:22.168228 | orchestrator | changed: [testbed-manager] 2025-10-09 10:35:22.168240 | orchestrator | 2025-10-09 10:35:22.168261 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-10-09 10:35:22.168274 | orchestrator | Thursday 09 October 2025 10:34:55 +0000 (0:00:00.642) 0:00:54.480 ****** 2025-10-09 10:35:22.168286 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-10-09 10:35:22.168297 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-10-09 10:35:22.168308 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-10-09 10:35:22.168319 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-10-09 10:35:22.168329 | orchestrator | 2025-10-09 10:35:22.168340 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:35:22.168351 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:35:22.168364 | orchestrator | 2025-10-09 10:35:22.168375 | orchestrator | 2025-10-09 10:35:22.168399 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:35:22.168410 | orchestrator | Thursday 09 October 2025 10:34:57 +0000 (0:00:01.528) 0:00:56.008 ****** 2025-10-09 10:35:22.168421 | orchestrator | =============================================================================== 2025-10-09 10:35:22.168432 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.53s 2025-10-09 10:35:22.168443 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.33s 2025-10-09 10:35:22.168454 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.59s 2025-10-09 10:35:22.168465 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2025-10-09 10:35:22.168475 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.42s 2025-10-09 10:35:22.168486 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.38s 2025-10-09 10:35:22.168497 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.10s 2025-10-09 10:35:22.168507 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2025-10-09 10:35:22.168518 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s 2025-10-09 10:35:22.168528 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2025-10-09 10:35:22.168539 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.52s 2025-10-09 10:35:22.168550 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.52s 2025-10-09 10:35:22.168566 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2025-10-09 10:35:22.168577 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-10-09 10:35:22.168587 | orchestrator | 2025-10-09 10:35:22.168598 | orchestrator | 2025-10-09 10:35:22.168608 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:35:22.168619 | orchestrator | 2025-10-09 10:35:22.168630 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:35:22.168640 | orchestrator | Thursday 09 October 2025 10:35:02 +0000 (0:00:00.206) 0:00:00.206 ****** 2025-10-09 10:35:22.168651 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.168662 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:35:22.168673 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:35:22.168683 | orchestrator | 2025-10-09 10:35:22.168694 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:35:22.168705 | orchestrator | Thursday 09 October 2025 10:35:02 +0000 (0:00:00.334) 0:00:00.541 ****** 2025-10-09 10:35:22.168716 | orchestrator | ok: 
[testbed-node-0] => (item=enable_keystone_True) 2025-10-09 10:35:22.168726 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-10-09 10:35:22.168737 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-10-09 10:35:22.168748 | orchestrator | 2025-10-09 10:35:22.168758 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-10-09 10:35:22.168769 | orchestrator | 2025-10-09 10:35:22.168787 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-10-09 10:35:22.168798 | orchestrator | Thursday 09 October 2025 10:35:03 +0000 (0:00:00.738) 0:00:01.280 ****** 2025-10-09 10:35:22.168808 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:35:22.168819 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.168830 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:35:22.168840 | orchestrator | 2025-10-09 10:35:22.168851 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:35:22.168862 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:35:22.168874 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:35:22.168885 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:35:22.168896 | orchestrator | 2025-10-09 10:35:22.168907 | orchestrator | 2025-10-09 10:35:22.168917 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:35:22.168928 | orchestrator | Thursday 09 October 2025 10:35:04 +0000 (0:00:00.805) 0:00:02.086 ****** 2025-10-09 10:35:22.168939 | orchestrator | =============================================================================== 2025-10-09 10:35:22.168950 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.81s 2025-10-09 10:35:22.168960 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2025-10-09 10:35:22.168971 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-10-09 10:35:22.168982 | orchestrator | 2025-10-09 10:35:22.168993 | orchestrator | 2025-10-09 10:35:22.169003 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:35:22.169014 | orchestrator | 2025-10-09 10:35:22.169025 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:35:22.169035 | orchestrator | Thursday 09 October 2025 10:32:34 +0000 (0:00:00.291) 0:00:00.292 ****** 2025-10-09 10:35:22.169046 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.169057 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:35:22.169068 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:35:22.169078 | orchestrator | 2025-10-09 10:35:22.169089 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:35:22.169100 | orchestrator | Thursday 09 October 2025 10:32:35 +0000 (0:00:00.332) 0:00:00.624 ****** 2025-10-09 10:35:22.169111 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-10-09 10:35:22.169121 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-10-09 10:35:22.169132 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-10-09 
10:35:22.169143 | orchestrator | 2025-10-09 10:35:22.169154 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-10-09 10:35:22.169165 | orchestrator | 2025-10-09 10:35:22.169181 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-09 10:35:22.169206 | orchestrator | Thursday 09 October 2025 10:32:35 +0000 (0:00:00.501) 0:00:01.126 ****** 2025-10-09 10:35:22.169218 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:35:22.169229 | orchestrator | 2025-10-09 10:35:22.169239 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-10-09 10:35:22.169250 | orchestrator | Thursday 09 October 2025 10:32:36 +0000 (0:00:00.557) 0:00:01.683 ****** 2025-10-09 10:35:22.169273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.169298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.169312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.169330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169412 | orchestrator | 2025-10-09 10:35:22.169424 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-10-09 10:35:22.169434 | orchestrator | Thursday 09 October 2025 10:32:38 +0000 (0:00:01.892) 0:00:03.575 ****** 2025-10-09 10:35:22.169445 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-10-09 10:35:22.169457 | orchestrator | 2025-10-09 10:35:22.169467 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-10-09 10:35:22.169478 | orchestrator | Thursday 09 October 2025 10:32:39 +0000 (0:00:01.004) 0:00:04.580 ****** 2025-10-09 10:35:22.169489 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.169500 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:35:22.169511 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:35:22.169521 | orchestrator | 2025-10-09 10:35:22.169532 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-10-09 10:35:22.169543 | orchestrator | Thursday 09 October 2025 10:32:39 +0000 (0:00:00.494) 0:00:05.075 ****** 2025-10-09 10:35:22.169554 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:35:22.169565 | orchestrator | 2025-10-09 10:35:22.169576 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-09 10:35:22.169593 | orchestrator | Thursday 09 October 2025 10:32:40 +0000 (0:00:00.696) 0:00:05.771 ****** 2025-10-09 10:35:22.169604 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:35:22.169621 | orchestrator | 2025-10-09 10:35:22.169632 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-10-09 10:35:22.169643 | orchestrator | Thursday 09 October 2025 10:32:40 +0000 (0:00:00.567) 0:00:06.339 ****** 2025-10-09 10:35:22.169660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.169672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.169685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.169697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.169784 | orchestrator | 2025-10-09 10:35:22.169795 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-10-09 10:35:22.169806 | orchestrator | Thursday 09 October 2025 10:32:44 +0000 (0:00:03.571) 0:00:09.910 ****** 2025-10-09 10:35:22.169823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:35:22.169842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.169863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:35:22.169875 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.169887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:35:22.169899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.169910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:35:22.169928 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.169947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:35:22.169964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.169976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:35:22.169987 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.169998 | orchestrator | 2025-10-09 10:35:22.170009 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-10-09 10:35:22.170067 | orchestrator | Thursday 09 October 2025 10:32:45 +0000 (0:00:00.896) 0:00:10.807 ****** 2025-10-09 10:35:22.170080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:35:22.170092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.170729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:35:22.170754 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.170772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:35:22.170783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.170794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:35:22.170804 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.170814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:35:22.170844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.170855 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:35:22.170865 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.170875 | orchestrator | 2025-10-09 10:35:22.170885 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-10-09 10:35:22.170894 | orchestrator | Thursday 09 October 2025 10:32:46 +0000 (0:00:00.790) 0:00:11.597 ****** 2025-10-09 10:35:22.170909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.170921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.170945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.170956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.170971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.170981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.170992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171028 | orchestrator | 2025-10-09 10:35:22.171038 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-10-09 10:35:22.171048 | orchestrator | Thursday 09 October 2025 10:32:49 +0000 (0:00:03.256) 0:00:14.854 ****** 2025-10-09 10:35:22.171065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.171080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.171091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.171108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.171124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.171135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.171150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171187 | orchestrator | 2025-10-09 10:35:22.171223 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-10-09 10:35:22.171233 | orchestrator | Thursday 09 October 2025 10:32:55 +0000 (0:00:06.023) 0:00:20.878 ****** 2025-10-09 10:35:22.171243 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:35:22.171253 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.171263 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:35:22.171272 | orchestrator | 2025-10-09 10:35:22.171282 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-10-09 10:35:22.171292 | orchestrator | Thursday 09 October 2025 10:32:57 +0000 (0:00:01.712) 0:00:22.590 ****** 2025-10-09 10:35:22.171301 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.171311 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.171320 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.171331 | orchestrator | 2025-10-09 10:35:22.171342 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-10-09 10:35:22.171353 | orchestrator | Thursday 09 October 2025 10:32:57 +0000 (0:00:00.574) 0:00:23.164 ****** 2025-10-09 10:35:22.171364 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.171375 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.171385 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.171396 | orchestrator | 2025-10-09 10:35:22.171406 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-10-09 10:35:22.171417 | orchestrator | Thursday 09 October 2025 10:32:57 +0000 (0:00:00.308) 0:00:23.473 ****** 2025-10-09 10:35:22.171428 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.171438 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.171449 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.171460 | orchestrator | 2025-10-09 10:35:22.171545 | orchestrator | TASK [keystone : Copying over existing policy file] 
**************************** 2025-10-09 10:35:22.171558 | orchestrator | Thursday 09 October 2025 10:32:58 +0000 (0:00:00.540) 0:00:24.014 ****** 2025-10-09 10:35:22.171577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.171595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.171619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.171631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.171648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.171661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:35:22.171677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.171713 | orchestrator | 2025-10-09 10:35:22.171723 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-09 10:35:22.171732 | orchestrator | Thursday 09 October 2025 10:33:00 +0000 (0:00:02.537) 0:00:26.552 ****** 2025-10-09 10:35:22.171742 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.171752 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.171761 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.171771 | orchestrator | 2025-10-09 10:35:22.171781 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-10-09 10:35:22.171790 | orchestrator | Thursday 09 October 2025 10:33:01 +0000 (0:00:00.325) 0:00:26.877 ****** 2025-10-09 10:35:22.171800 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-09 10:35:22.171810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-09 10:35:22.171820 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-09 10:35:22.171829 | orchestrator | 2025-10-09 10:35:22.171839 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-10-09 10:35:22.171848 | orchestrator | Thursday 09 October 2025 10:33:03 +0000 (0:00:01.718) 0:00:28.595 ****** 2025-10-09 10:35:22.171858 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:35:22.171867 | orchestrator | 2025-10-09 10:35:22.171877 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-10-09 10:35:22.171887 | orchestrator | Thursday 09 October 2025 10:33:04 +0000 (0:00:01.092) 0:00:29.688 ****** 2025-10-09 10:35:22.171896 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.171906 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.171915 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.171925 | orchestrator | 2025-10-09 10:35:22.171934 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-10-09 10:35:22.171944 | orchestrator | Thursday 09 October 2025 10:33:04 +0000 (0:00:00.829) 0:00:30.517 ****** 2025-10-09 10:35:22.171954 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-10-09 10:35:22.171967 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-10-09 10:35:22.171977 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:35:22.171987 | orchestrator | 2025-10-09 10:35:22.171997 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-10-09 10:35:22.172006 | orchestrator | Thursday 09 October 2025 10:33:06 +0000 (0:00:01.216) 0:00:31.734 ****** 2025-10-09 10:35:22.172016 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.172032 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:35:22.172041 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:35:22.172051 | orchestrator | 2025-10-09 10:35:22.172061 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-10-09 10:35:22.172070 | orchestrator | Thursday 09 October 2025 10:33:06 +0000 (0:00:00.324) 
0:00:32.058 ****** 2025-10-09 10:35:22.172080 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-09 10:35:22.172090 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-09 10:35:22.172100 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-09 10:35:22.172109 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-09 10:35:22.172119 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-09 10:35:22.172128 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-09 10:35:22.172143 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-09 10:35:22.172153 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-09 10:35:22.172163 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-09 10:35:22.172172 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-09 10:35:22.172182 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-09 10:35:22.172208 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-09 10:35:22.172219 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-10-09 10:35:22.172229 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-10-09 10:35:22.172238 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-10-09 10:35:22.172248 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:35:22.172258 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:35:22.172267 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:35:22.172277 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:35:22.172287 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:35:22.172296 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:35:22.172305 | orchestrator | 2025-10-09 10:35:22.172315 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-10-09 10:35:22.172324 | orchestrator | Thursday 09 October 2025 10:33:15 +0000 (0:00:09.169) 0:00:41.227 ****** 2025-10-09 10:35:22.172334 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:35:22.172344 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:35:22.172353 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:35:22.172363 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:35:22.172372 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:35:22.172382 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:35:22.172391 | orchestrator | 2025-10-09 10:35:22.172408 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-10-09 10:35:22.172418 | orchestrator | Thursday 09 October 2025 10:33:18 +0000 (0:00:02.846) 0:00:44.074 ****** 2025-10-09 10:35:22.172436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.172452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.172464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:35:22.172474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.172485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.172509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:35:22.172519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.172533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
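
The items echoed in the task results above and below are the kolla-ansible service definitions for the keystone role: container name, image, bind mounts, a healthcheck command, and (for the API container only) an haproxy section describing the internal and external frontends. As a minimal, hypothetical sketch of that structure, using only data printed in this log and not kolla-ansible's actual module code, the following Python reproduces a trimmed pair of definitions and the kind of per-service filtering the role's with_dict loops perform (disabled services or services without an haproxy section are skipped, which is why the "skipping" results appear above):

    # Sketch only: dict contents are copied from the task output in this log;
    # the loop mirrors the role's with_dict iteration, not kolla-ansible itself.
    keystone_services = {
        "keystone": {
            "container_name": "keystone",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/keystone:26.0.1.20250928",
            "volumes": [
                "/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro",
                "kolla_logs:/var/log/kolla/",
                "keystone_fernet_tokens:/etc/keystone/fernet-keys",
            ],
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
                "timeout": "30",
            },
            "haproxy": {
                "keystone_internal": {"enabled": True, "external": False,
                                      "port": "5000", "listen_port": "5000"},
                "keystone_external": {"enabled": True, "external": True,
                                      "external_fqdn": "api.testbed.osism.xyz",
                                      "port": "5000", "listen_port": "5000"},
            },
        },
        "keystone-ssh": {
            "container_name": "keystone_ssh",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928",
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen sshd 8023"]},
        },
    }

    for name, svc in keystone_services.items():
        if not svc.get("enabled"):
            continue  # disabled services are skipped, like the "skipping" results above
        check = " ".join(svc["healthcheck"]["test"][1:])
        print(f"{svc['container_name']}: image={svc['image']} healthcheck={check!r}")
        # only the API container carries an haproxy section; keystone-ssh and
        # keystone-fernet have none, so the cert-copy tasks skip them entirely
        for frontend, cfg in svc.get("haproxy", {}).items():
            scope = "external" if cfg.get("external") else "internal"
            print(f"  haproxy {frontend}: {scope} listen_port={cfg['listen_port']}")

The keystone definition above is abbreviated (the empty-string volume placeholders and some haproxy keys from the log are omitted); the full dicts are visible verbatim in the task output.
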
2025-10-09 10:35:22.172544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:35:22.172554 | orchestrator | 2025-10-09 10:35:22.172563 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-09 10:35:22.172573 | orchestrator | Thursday 09 October 2025 10:33:20 +0000 (0:00:02.364) 0:00:46.438 ****** 2025-10-09 10:35:22.172583 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.172593 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.172602 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.172612 | orchestrator | 2025-10-09 10:35:22.172621 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-10-09 10:35:22.172630 | orchestrator | Thursday 09 October 2025 10:33:21 +0000 (0:00:00.319) 0:00:46.758 ****** 2025-10-09 10:35:22.172640 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.172649 | orchestrator | 2025-10-09 10:35:22.172659 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-10-09 10:35:22.172677 | orchestrator | Thursday 09 October 2025 10:33:23 +0000 (0:00:02.367) 0:00:49.125 ****** 2025-10-09 10:35:22.172687 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.172696 | orchestrator | 2025-10-09 10:35:22.172706 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-10-09 10:35:22.172715 | orchestrator | Thursday 09 October 2025 10:33:25 +0000 (0:00:02.251) 0:00:51.376 ****** 2025-10-09 10:35:22.172725 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.172734 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:35:22.172744 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:35:22.172753 | orchestrator | 2025-10-09 10:35:22.172763 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-10-09 10:35:22.172772 | orchestrator | Thursday 09 October 2025 10:33:26 +0000 (0:00:00.829) 0:00:52.206 ****** 2025-10-09 10:35:22.172782 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.172792 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:35:22.172801 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:35:22.172811 | orchestrator | 2025-10-09 10:35:22.172820 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-10-09 10:35:22.172830 | orchestrator | Thursday 09 October 2025 10:33:27 +0000 (0:00:00.550) 0:00:52.756 ****** 2025-10-09 10:35:22.172839 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.172849 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.172858 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.172868 | orchestrator | 2025-10-09 10:35:22.172877 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-10-09 10:35:22.172887 | orchestrator | 
Thursday 09 October 2025 10:33:27 +0000 (0:00:00.359) 0:00:53.115 ****** 2025-10-09 10:35:22.172897 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.172906 | orchestrator | 2025-10-09 10:35:22.172916 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-10-09 10:35:22.172926 | orchestrator | Thursday 09 October 2025 10:33:42 +0000 (0:00:15.144) 0:01:08.260 ****** 2025-10-09 10:35:22.172935 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.172945 | orchestrator | 2025-10-09 10:35:22.172959 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-10-09 10:35:22.172969 | orchestrator | Thursday 09 October 2025 10:33:53 +0000 (0:00:11.129) 0:01:19.390 ****** 2025-10-09 10:35:22.172978 | orchestrator | 2025-10-09 10:35:22.172988 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-10-09 10:35:22.172998 | orchestrator | Thursday 09 October 2025 10:33:53 +0000 (0:00:00.065) 0:01:19.455 ****** 2025-10-09 10:35:22.173007 | orchestrator | 2025-10-09 10:35:22.173017 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-10-09 10:35:22.173026 | orchestrator | Thursday 09 October 2025 10:33:53 +0000 (0:00:00.062) 0:01:19.518 ****** 2025-10-09 10:35:22.173036 | orchestrator | 2025-10-09 10:35:22.173045 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-10-09 10:35:22.173146 | orchestrator | Thursday 09 October 2025 10:33:54 +0000 (0:00:00.072) 0:01:19.590 ****** 2025-10-09 10:35:22.173161 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.173171 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:35:22.173181 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:35:22.173190 | orchestrator | 2025-10-09 10:35:22.173216 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-10-09 10:35:22.173225 | orchestrator | Thursday 09 October 2025 10:34:12 +0000 (0:00:18.577) 0:01:38.168 ****** 2025-10-09 10:35:22.173235 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:35:22.173244 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:35:22.173315 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.173327 | orchestrator | 2025-10-09 10:35:22.173336 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-10-09 10:35:22.173346 | orchestrator | Thursday 09 October 2025 10:34:20 +0000 (0:00:07.718) 0:01:45.886 ****** 2025-10-09 10:35:22.173369 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.173379 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:35:22.173389 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:35:22.173398 | orchestrator | 2025-10-09 10:35:22.173408 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-09 10:35:22.173418 | orchestrator | Thursday 09 October 2025 10:34:27 +0000 (0:00:07.128) 0:01:53.015 ****** 2025-10-09 10:35:22.173427 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:35:22.173437 | orchestrator | 2025-10-09 10:35:22.173446 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-10-09 10:35:22.173456 | orchestrator | Thursday 09 October 2025 10:34:28 +0000 (0:00:00.801) 
0:01:53.817 ****** 2025-10-09 10:35:22.173465 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.173475 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:35:22.173484 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:35:22.173494 | orchestrator | 2025-10-09 10:35:22.173503 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-10-09 10:35:22.173512 | orchestrator | Thursday 09 October 2025 10:34:29 +0000 (0:00:00.755) 0:01:54.573 ****** 2025-10-09 10:35:22.173522 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:35:22.173531 | orchestrator | 2025-10-09 10:35:22.173541 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-10-09 10:35:22.173550 | orchestrator | Thursday 09 October 2025 10:34:30 +0000 (0:00:01.782) 0:01:56.355 ****** 2025-10-09 10:35:22.173560 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-10-09 10:35:22.173569 | orchestrator | 2025-10-09 10:35:22.173578 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-10-09 10:35:22.173588 | orchestrator | Thursday 09 October 2025 10:34:43 +0000 (0:00:12.466) 0:02:08.822 ****** 2025-10-09 10:35:22.173598 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-10-09 10:35:22.173607 | orchestrator | 2025-10-09 10:35:22.173617 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-10-09 10:35:22.173626 | orchestrator | Thursday 09 October 2025 10:35:08 +0000 (0:00:25.409) 0:02:34.231 ****** 2025-10-09 10:35:22.173636 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-10-09 10:35:22.173646 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-10-09 10:35:22.173655 | orchestrator | 2025-10-09 10:35:22.173664 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-10-09 10:35:22.173674 | orchestrator | Thursday 09 October 2025 10:35:15 +0000 (0:00:07.152) 0:02:41.383 ****** 2025-10-09 10:35:22.173683 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.173693 | orchestrator | 2025-10-09 10:35:22.173702 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-10-09 10:35:22.173712 | orchestrator | Thursday 09 October 2025 10:35:15 +0000 (0:00:00.168) 0:02:41.552 ****** 2025-10-09 10:35:22.173721 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.173731 | orchestrator | 2025-10-09 10:35:22.173740 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-10-09 10:35:22.173750 | orchestrator | Thursday 09 October 2025 10:35:16 +0000 (0:00:00.205) 0:02:41.758 ****** 2025-10-09 10:35:22.173759 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.173768 | orchestrator | 2025-10-09 10:35:22.173778 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-10-09 10:35:22.173787 | orchestrator | Thursday 09 October 2025 10:35:16 +0000 (0:00:00.132) 0:02:41.890 ****** 2025-10-09 10:35:22.173797 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.173806 | orchestrator | 2025-10-09 10:35:22.173816 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-10-09 10:35:22.173825 | orchestrator | Thursday 09 October 2025 
10:35:16 +0000 (0:00:00.625) 0:02:42.515 ****** 2025-10-09 10:35:22.173835 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:35:22.173852 | orchestrator | 2025-10-09 10:35:22.173862 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-09 10:35:22.173871 | orchestrator | Thursday 09 October 2025 10:35:20 +0000 (0:00:03.139) 0:02:45.655 ****** 2025-10-09 10:35:22.173881 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:35:22.173890 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:35:22.173900 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:35:22.173909 | orchestrator | 2025-10-09 10:35:22.173925 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:35:22.173935 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-10-09 10:35:22.173947 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-10-09 10:35:22.173959 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-10-09 10:35:22.173969 | orchestrator | 2025-10-09 10:35:22.173980 | orchestrator | 2025-10-09 10:35:22.173991 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:35:22.174001 | orchestrator | Thursday 09 October 2025 10:35:20 +0000 (0:00:00.617) 0:02:46.272 ****** 2025-10-09 10:35:22.174012 | orchestrator | =============================================================================== 2025-10-09 10:35:22.174055 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.41s 2025-10-09 10:35:22.174066 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 18.58s 2025-10-09 10:35:22.174076 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.14s 2025-10-09 10:35:22.174087 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.47s 2025-10-09 10:35:22.174103 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.13s 2025-10-09 10:35:22.174114 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.17s 2025-10-09 10:35:22.174125 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.72s 2025-10-09 10:35:22.174135 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.15s 2025-10-09 10:35:22.174146 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.13s 2025-10-09 10:35:22.174157 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.02s 2025-10-09 10:35:22.174168 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.57s 2025-10-09 10:35:22.174178 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.26s 2025-10-09 10:35:22.174190 | orchestrator | keystone : Creating default user role ----------------------------------- 3.14s 2025-10-09 10:35:22.174230 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.85s 2025-10-09 10:35:22.174241 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.54s 2025-10-09 10:35:22.174251 | orchestrator | keystone : 
Creating keystone database ----------------------------------- 2.37s 2025-10-09 10:35:22.174262 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.36s 2025-10-09 10:35:22.174272 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.25s 2025-10-09 10:35:22.174284 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.89s 2025-10-09 10:35:22.174295 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s 2025-10-09 10:35:22.174305 | orchestrator | 2025-10-09 10:35:22 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:22.174315 | orchestrator | 2025-10-09 10:35:22 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:22.174325 | orchestrator | 2025-10-09 10:35:22 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:22.174343 | orchestrator | 2025-10-09 10:35:22 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:22.174353 | orchestrator | 2025-10-09 10:35:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:25.211639 | orchestrator | 2025-10-09 10:35:25 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:25.214416 | orchestrator | 2025-10-09 10:35:25 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:25.215683 | orchestrator | 2025-10-09 10:35:25 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:25.219857 | orchestrator | 2025-10-09 10:35:25 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:25.220812 | orchestrator | 2025-10-09 10:35:25 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:25.221673 | orchestrator | 2025-10-09 10:35:25 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:28.263660 | orchestrator | 2025-10-09 10:35:28 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:28.264743 | orchestrator | 2025-10-09 10:35:28 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:28.266118 | orchestrator | 2025-10-09 10:35:28 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:28.267336 | orchestrator | 2025-10-09 10:35:28 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:28.268658 | orchestrator | 2025-10-09 10:35:28 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:28.269079 | orchestrator | 2025-10-09 10:35:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:31.304604 | orchestrator | 2025-10-09 10:35:31 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:31.306621 | orchestrator | 2025-10-09 10:35:31 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:31.307691 | orchestrator | 2025-10-09 10:35:31 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:31.308325 | orchestrator | 2025-10-09 10:35:31 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:31.309388 | orchestrator | 2025-10-09 10:35:31 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:31.309839 | orchestrator | 2025-10-09 10:35:31 | 
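The keystone bootstrap tasks logged above run keystone-manage inside the kolla containers: a fernet key repository is created once and then copied to the other controllers by the later "Run key distribution" task, and the bootstrap step creates the admin project, user, role, service, and the endpoints shown in the recap. A rough Python sketch of the equivalent commands; the container names and the admin password are assumptions, the endpoint URLs are the ones registered above:

```python
import subprocess

def run(cmd):
    """Run one command and fail loudly, mirroring the bootstrap containers."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# "Running Keystone fernet bootstrap container": create the initial fernet
# key repository that "Run key distribution" later syncs to the other nodes.
run(["docker", "exec", "keystone_fernet",          # container name assumed
     "keystone-manage", "fernet_setup",
     "--keystone-user", "keystone", "--keystone-group", "keystone"])

# "Creating admin project, user, role, service, and endpoint":
run(["docker", "exec", "keystone",                 # container name assumed
     "keystone-manage", "bootstrap",
     "--bootstrap-password", "ADMIN_PASSWORD",     # placeholder, not from this log
     "--bootstrap-region-id", "RegionOne",
     "--bootstrap-internal-url", "https://api-int.testbed.osism.xyz:5000",
     "--bootstrap-public-url", "https://api.testbed.osism.xyz:5000"])
```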
INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:34.362349 | orchestrator | 2025-10-09 10:35:34 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:34.362474 | orchestrator | 2025-10-09 10:35:34 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:34.362490 | orchestrator | 2025-10-09 10:35:34 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:34.362501 | orchestrator | 2025-10-09 10:35:34 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:34.362512 | orchestrator | 2025-10-09 10:35:34 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:34.362524 | orchestrator | 2025-10-09 10:35:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:37.390348 | orchestrator | 2025-10-09 10:35:37 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:37.390436 | orchestrator | 2025-10-09 10:35:37 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:37.390472 | orchestrator | 2025-10-09 10:35:37 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:37.390483 | orchestrator | 2025-10-09 10:35:37 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:37.390494 | orchestrator | 2025-10-09 10:35:37 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:37.390505 | orchestrator | 2025-10-09 10:35:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:40.430717 | orchestrator | 2025-10-09 10:35:40 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:40.430819 | orchestrator | 2025-10-09 10:35:40 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:40.430834 | orchestrator | 2025-10-09 10:35:40 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:40.431350 | orchestrator | 2025-10-09 10:35:40 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:40.432176 | orchestrator | 2025-10-09 10:35:40 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:40.432218 | orchestrator | 2025-10-09 10:35:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:43.474427 | orchestrator | 2025-10-09 10:35:43 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:43.475059 | orchestrator | 2025-10-09 10:35:43 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:43.475598 | orchestrator | 2025-10-09 10:35:43 | INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state STARTED 2025-10-09 10:35:43.476185 | orchestrator | 2025-10-09 10:35:43 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:43.476952 | orchestrator | 2025-10-09 10:35:43 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:43.476979 | orchestrator | 2025-10-09 10:35:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:46.504459 | orchestrator | 2025-10-09 10:35:46 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:46.504571 | orchestrator | 2025-10-09 10:35:46 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:46.504747 | orchestrator | 2025-10-09 10:35:46 | 
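The earlier "Waiting for Keystone SSH port to be UP" task is a plain TCP reachability check before the fernet keys are distributed between the controllers. A minimal sketch, assuming the inventory host names shown above and kolla-ansible's usual keystone-ssh port of 8023 (the port is an assumption, not taken from this log):

```python
import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=1.0):
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

for host in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
    assert wait_for_port(host, 8023), f"keystone-ssh not reachable on {host}"  # port assumed
```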
INFO  | Task 7b404263-b807-46fe-b408-86cef2572c16 is in state SUCCESS 2025-10-09 10:35:46.505359 | orchestrator | 2025-10-09 10:35:46 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:46.505965 | orchestrator | 2025-10-09 10:35:46 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:46.505990 | orchestrator | 2025-10-09 10:35:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:49.536147 | orchestrator | 2025-10-09 10:35:49 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:49.536318 | orchestrator | 2025-10-09 10:35:49 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:49.536770 | orchestrator | 2025-10-09 10:35:49 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:35:49.537362 | orchestrator | 2025-10-09 10:35:49 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:49.538442 | orchestrator | 2025-10-09 10:35:49 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:49.538533 | orchestrator | 2025-10-09 10:35:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:52.576435 | orchestrator | 2025-10-09 10:35:52 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:52.577297 | orchestrator | 2025-10-09 10:35:52 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:52.579106 | orchestrator | 2025-10-09 10:35:52 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:35:52.580878 | orchestrator | 2025-10-09 10:35:52 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:52.582597 | orchestrator | 2025-10-09 10:35:52 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:52.583167 | orchestrator | 2025-10-09 10:35:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:55.636907 | orchestrator | 2025-10-09 10:35:55 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:55.640629 | orchestrator | 2025-10-09 10:35:55 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:55.644181 | orchestrator | 2025-10-09 10:35:55 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:35:55.645433 | orchestrator | 2025-10-09 10:35:55 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:55.647903 | orchestrator | 2025-10-09 10:35:55 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:55.647925 | orchestrator | 2025-10-09 10:35:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:58.684724 | orchestrator | 2025-10-09 10:35:58 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:35:58.686001 | orchestrator | 2025-10-09 10:35:58 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:35:58.686735 | orchestrator | 2025-10-09 10:35:58 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:35:58.687433 | orchestrator | 2025-10-09 10:35:58 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:35:58.688314 | orchestrator | 2025-10-09 10:35:58 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:35:58.688527 | orchestrator | 
2025-10-09 10:35:58 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:01.734957 | orchestrator | 2025-10-09 10:36:01 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:36:01.735410 | orchestrator | 2025-10-09 10:36:01 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:01.736783 | orchestrator | 2025-10-09 10:36:01 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:01.739084 | orchestrator | 2025-10-09 10:36:01 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:01.740047 | orchestrator | 2025-10-09 10:36:01 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:01.740105 | orchestrator | 2025-10-09 10:36:01 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:04.781138 | orchestrator | 2025-10-09 10:36:04 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:36:04.781870 | orchestrator | 2025-10-09 10:36:04 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:04.783904 | orchestrator | 2025-10-09 10:36:04 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:04.784986 | orchestrator | 2025-10-09 10:36:04 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:04.787411 | orchestrator | 2025-10-09 10:36:04 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:04.787439 | orchestrator | 2025-10-09 10:36:04 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:07.828579 | orchestrator | 2025-10-09 10:36:07 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:36:07.828674 | orchestrator | 2025-10-09 10:36:07 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:07.828688 | orchestrator | 2025-10-09 10:36:07 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:07.828698 | orchestrator | 2025-10-09 10:36:07 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:07.828708 | orchestrator | 2025-10-09 10:36:07 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:07.828718 | orchestrator | 2025-10-09 10:36:07 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:10.852045 | orchestrator | 2025-10-09 10:36:10 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:36:10.852429 | orchestrator | 2025-10-09 10:36:10 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:10.854013 | orchestrator | 2025-10-09 10:36:10 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:10.854878 | orchestrator | 2025-10-09 10:36:10 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:10.855900 | orchestrator | 2025-10-09 10:36:10 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:10.855920 | orchestrator | 2025-10-09 10:36:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:13.891934 | orchestrator | 2025-10-09 10:36:13 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:36:13.892454 | orchestrator | 2025-10-09 10:36:13 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:13.893710 | orchestrator | 
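The repeating "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines are a simple poll of the asynchronous deployment tasks until each one reaches a terminal state. A minimal sketch of such a loop, assuming the logged IDs are Celery task IDs and that the broker/result backend URL below is reachable (both are assumptions, not taken from this log):

```python
import time
from celery import Celery

# Hypothetical broker/backend location; adjust to the actual deployment.
app = Celery(broker="redis://manager:6379/0", backend="redis://manager:6379/0")

def wait_for(task_ids, interval=1):
    """Poll every `interval` seconds until every task left the STARTED/PENDING state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = app.AsyncResult(task_id).state
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```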
2025-10-09 10:36:13 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:13.895088 | orchestrator | 2025-10-09 10:36:13 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:13.895848 | orchestrator | 2025-10-09 10:36:13 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:13.895868 | orchestrator | 2025-10-09 10:36:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:16.926473 | orchestrator | 2025-10-09 10:36:16 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:36:16.927528 | orchestrator | 2025-10-09 10:36:16 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:16.928347 | orchestrator | 2025-10-09 10:36:16 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:16.929640 | orchestrator | 2025-10-09 10:36:16 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:16.930699 | orchestrator | 2025-10-09 10:36:16 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:16.930718 | orchestrator | 2025-10-09 10:36:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:19.972613 | orchestrator | 2025-10-09 10:36:19 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:36:19.972933 | orchestrator | 2025-10-09 10:36:19 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:19.973280 | orchestrator | 2025-10-09 10:36:19 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:19.975467 | orchestrator | 2025-10-09 10:36:19 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:19.975922 | orchestrator | 2025-10-09 10:36:19 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:19.975945 | orchestrator | 2025-10-09 10:36:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:23.011618 | orchestrator | 2025-10-09 10:36:23 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state STARTED 2025-10-09 10:36:23.013573 | orchestrator | 2025-10-09 10:36:23 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:23.015563 | orchestrator | 2025-10-09 10:36:23 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:23.016677 | orchestrator | 2025-10-09 10:36:23 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:23.019078 | orchestrator | 2025-10-09 10:36:23 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:23.019103 | orchestrator | 2025-10-09 10:36:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:26.057467 | orchestrator | 2025-10-09 10:36:26 | INFO  | Task a68be144-678f-45ed-9b54-dd2a288fbc7e is in state SUCCESS 2025-10-09 10:36:26.058630 | orchestrator | 2025-10-09 10:36:26.058661 | orchestrator | 2025-10-09 10:36:26.058672 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:36:26.058683 | orchestrator | 2025-10-09 10:36:26.058693 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:36:26.058703 | orchestrator | Thursday 09 October 2025 10:35:11 +0000 (0:00:00.311) 0:00:00.311 ****** 2025-10-09 10:36:26.058713 | orchestrator | ok: [testbed-node-3] 
2025-10-09 10:36:26.058724 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:36:26.058734 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:36:26.058744 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:36:26.058753 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:36:26.058763 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:36:26.058773 | orchestrator | ok: [testbed-manager] 2025-10-09 10:36:26.058782 | orchestrator | 2025-10-09 10:36:26.058792 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:36:26.058802 | orchestrator | Thursday 09 October 2025 10:35:12 +0000 (0:00:01.015) 0:00:01.326 ****** 2025-10-09 10:36:26.058812 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-10-09 10:36:26.058822 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-10-09 10:36:26.058831 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-10-09 10:36:26.058841 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-10-09 10:36:26.058851 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-10-09 10:36:26.058860 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-10-09 10:36:26.058870 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-10-09 10:36:26.058879 | orchestrator | 2025-10-09 10:36:26.058889 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-10-09 10:36:26.058898 | orchestrator | 2025-10-09 10:36:26.058908 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-10-09 10:36:26.058918 | orchestrator | Thursday 09 October 2025 10:35:13 +0000 (0:00:01.303) 0:00:02.629 ****** 2025-10-09 10:36:26.058929 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2025-10-09 10:36:26.058941 | orchestrator | 2025-10-09 10:36:26.058951 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-10-09 10:36:26.058961 | orchestrator | Thursday 09 October 2025 10:35:16 +0000 (0:00:02.698) 0:00:05.327 ****** 2025-10-09 10:36:26.058992 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store)) 2025-10-09 10:36:26.059002 | orchestrator | 2025-10-09 10:36:26.059012 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-10-09 10:36:26.059022 | orchestrator | Thursday 09 October 2025 10:35:19 +0000 (0:00:03.428) 0:00:08.756 ****** 2025-10-09 10:36:26.059032 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-10-09 10:36:26.059043 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-10-09 10:36:26.059053 | orchestrator | 2025-10-09 10:36:26.059062 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-10-09 10:36:26.059072 | orchestrator | Thursday 09 October 2025 10:35:26 +0000 (0:00:06.539) 0:00:15.296 ****** 2025-10-09 10:36:26.059082 | orchestrator | changed: [testbed-node-3] => (item=service) 2025-10-09 10:36:26.059092 | orchestrator | 2025-10-09 10:36:26.059101 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] 
************************* 2025-10-09 10:36:26.059150 | orchestrator | Thursday 09 October 2025 10:35:29 +0000 (0:00:03.005) 0:00:18.302 ****** 2025-10-09 10:36:26.059161 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:36:26.059171 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service) 2025-10-09 10:36:26.059180 | orchestrator | 2025-10-09 10:36:26.059226 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-10-09 10:36:26.059237 | orchestrator | Thursday 09 October 2025 10:35:32 +0000 (0:00:03.706) 0:00:22.008 ****** 2025-10-09 10:36:26.059248 | orchestrator | ok: [testbed-node-3] => (item=admin) 2025-10-09 10:36:26.059259 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin) 2025-10-09 10:36:26.059270 | orchestrator | 2025-10-09 10:36:26.059281 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-10-09 10:36:26.059292 | orchestrator | Thursday 09 October 2025 10:35:39 +0000 (0:00:06.383) 0:00:28.391 ****** 2025-10-09 10:36:26.059303 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin) 2025-10-09 10:36:26.059314 | orchestrator | 2025-10-09 10:36:26.059324 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:36:26.059335 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.059346 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.059358 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.059369 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.059380 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.059404 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.059415 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.059427 | orchestrator | 2025-10-09 10:36:26.059438 | orchestrator | 2025-10-09 10:36:26.059449 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:36:26.059460 | orchestrator | Thursday 09 October 2025 10:35:45 +0000 (0:00:06.542) 0:00:34.934 ****** 2025-10-09 10:36:26.059472 | orchestrator | =============================================================================== 2025-10-09 10:36:26.059483 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.54s 2025-10-09 10:36:26.059507 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.54s 2025-10-09 10:36:26.059518 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.38s 2025-10-09 10:36:26.059529 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.71s 2025-10-09 10:36:26.059540 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.43s 2025-10-09 10:36:26.059551 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.01s 2025-10-09 10:36:26.059562 | orchestrator | ceph-rgw : include_tasks 
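The service-ks-register tasks for ceph-rgw above register a "swift (object-store)" service in Keystone together with its internal and public endpoints. A rough openstacksdk equivalent; the cloud credentials and the region are assumptions, the URLs are the ones logged above:

```python
import openstack

# Assumption: credentials are provided via environment variables (OS_*).
conn = openstack.connect(cloud="envvars")

# "ceph-rgw | Creating services"
service = conn.identity.create_service(
    name="swift", type="object-store", enabled=True)

# "ceph-rgw | Creating endpoints"
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"),
    ("public", "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"),
]:
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url,
        region_id="RegionOne")  # region assumed, not shown for these endpoints
```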
------------------------------------------------ 2.70s 2025-10-09 10:36:26.059573 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.30s 2025-10-09 10:36:26.059584 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.02s 2025-10-09 10:36:26.059595 | orchestrator | 2025-10-09 10:36:26.059606 | orchestrator | 2025-10-09 10:36:26.059615 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-10-09 10:36:26.059625 | orchestrator | 2025-10-09 10:36:26.059635 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-10-09 10:36:26.059644 | orchestrator | Thursday 09 October 2025 10:35:02 +0000 (0:00:00.291) 0:00:00.291 ****** 2025-10-09 10:36:26.059654 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.059664 | orchestrator | 2025-10-09 10:36:26.059673 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-10-09 10:36:26.059683 | orchestrator | Thursday 09 October 2025 10:35:04 +0000 (0:00:02.009) 0:00:02.301 ****** 2025-10-09 10:36:26.059799 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.059810 | orchestrator | 2025-10-09 10:36:26.059820 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-10-09 10:36:26.059829 | orchestrator | Thursday 09 October 2025 10:35:05 +0000 (0:00:01.129) 0:00:03.430 ****** 2025-10-09 10:36:26.059839 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.059849 | orchestrator | 2025-10-09 10:36:26.059859 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-10-09 10:36:26.059869 | orchestrator | Thursday 09 October 2025 10:35:06 +0000 (0:00:01.059) 0:00:04.490 ****** 2025-10-09 10:36:26.059878 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.059888 | orchestrator | 2025-10-09 10:36:26.059898 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-10-09 10:36:26.059908 | orchestrator | Thursday 09 October 2025 10:35:08 +0000 (0:00:02.417) 0:00:06.907 ****** 2025-10-09 10:36:26.059917 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.059927 | orchestrator | 2025-10-09 10:36:26.059937 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-10-09 10:36:26.059946 | orchestrator | Thursday 09 October 2025 10:35:10 +0000 (0:00:01.499) 0:00:08.407 ****** 2025-10-09 10:36:26.059956 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.059965 | orchestrator | 2025-10-09 10:36:26.059975 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-10-09 10:36:26.059985 | orchestrator | Thursday 09 October 2025 10:35:11 +0000 (0:00:01.133) 0:00:09.541 ****** 2025-10-09 10:36:26.059995 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.060004 | orchestrator | 2025-10-09 10:36:26.060014 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-10-09 10:36:26.060024 | orchestrator | Thursday 09 October 2025 10:35:13 +0000 (0:00:02.178) 0:00:11.719 ****** 2025-10-09 10:36:26.060034 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.060043 | orchestrator | 2025-10-09 10:36:26.060053 | orchestrator | TASK [Create admin user] ******************************************************* 2025-10-09 
10:36:26.060063 | orchestrator | Thursday 09 October 2025 10:35:15 +0000 (0:00:01.441) 0:00:13.160 ****** 2025-10-09 10:36:26.060072 | orchestrator | changed: [testbed-manager] 2025-10-09 10:36:26.060082 | orchestrator | 2025-10-09 10:36:26.060092 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-10-09 10:36:26.060101 | orchestrator | Thursday 09 October 2025 10:36:01 +0000 (0:00:45.870) 0:00:59.030 ****** 2025-10-09 10:36:26.060119 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:36:26.060129 | orchestrator | 2025-10-09 10:36:26.060138 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-09 10:36:26.060148 | orchestrator | 2025-10-09 10:36:26.060158 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-10-09 10:36:26.060167 | orchestrator | Thursday 09 October 2025 10:36:01 +0000 (0:00:00.165) 0:00:59.196 ****** 2025-10-09 10:36:26.060177 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:36:26.060210 | orchestrator | 2025-10-09 10:36:26.060221 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-09 10:36:26.060231 | orchestrator | 2025-10-09 10:36:26.060240 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-10-09 10:36:26.060250 | orchestrator | Thursday 09 October 2025 10:36:13 +0000 (0:00:11.855) 0:01:11.051 ****** 2025-10-09 10:36:26.060260 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:36:26.060269 | orchestrator | 2025-10-09 10:36:26.060279 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-09 10:36:26.060288 | orchestrator | 2025-10-09 10:36:26.060298 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-10-09 10:36:26.060308 | orchestrator | Thursday 09 October 2025 10:36:24 +0000 (0:00:11.362) 0:01:22.414 ****** 2025-10-09 10:36:26.060317 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:36:26.060327 | orchestrator | 2025-10-09 10:36:26.060344 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:36:26.060355 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:36:26.060365 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.060374 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.060390 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:36:26.060400 | orchestrator | 2025-10-09 10:36:26.060410 | orchestrator | 2025-10-09 10:36:26.060421 | orchestrator | 2025-10-09 10:36:26.060432 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:36:26.060443 | orchestrator | Thursday 09 October 2025 10:36:25 +0000 (0:00:01.143) 0:01:23.557 ****** 2025-10-09 10:36:26.060454 | orchestrator | =============================================================================== 2025-10-09 10:36:26.060465 | orchestrator | Create admin user ------------------------------------------------------ 45.87s 2025-10-09 10:36:26.060476 | orchestrator | Restart ceph manager service ------------------------------------------- 24.36s 
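The ceph dashboard play above maps to plain ceph mgr configuration calls plus an account-creation call against the dashboard module, followed by restarting the managers one node at a time. A hedged sketch; the password file path and the admin user name are placeholders, while the port, address, standby behaviour, and status-code values are the ones set above:

```python
import subprocess

def ceph(*args):
    """Invoke the ceph CLI and fail on error."""
    subprocess.run(["ceph", *args], check=True)

ceph("mgr", "module", "disable", "dashboard")
ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
ceph("mgr", "module", "enable", "dashboard")

# "Create admin user": the dashboard password is read from a temporary file (-i),
# matching the write/remove tasks for ceph_dashboard_password above.
ceph("dashboard", "ac-user-create", "admin",                 # user name assumed
     "-i", "/tmp/ceph_dashboard_password",                   # placeholder path
     "administrator")
```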
2025-10-09 10:36:26.060487 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 2.42s 2025-10-09 10:36:26.060497 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.18s 2025-10-09 10:36:26.060507 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.01s 2025-10-09 10:36:26.060516 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.50s 2025-10-09 10:36:26.060526 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.44s 2025-10-09 10:36:26.060536 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.13s 2025-10-09 10:36:26.060545 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.13s 2025-10-09 10:36:26.060555 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.06s 2025-10-09 10:36:26.060564 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s 2025-10-09 10:36:26.060574 | orchestrator | 2025-10-09 10:36:26 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:26.061862 | orchestrator | 2025-10-09 10:36:26 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:26.063032 | orchestrator | 2025-10-09 10:36:26 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:26.064565 | orchestrator | 2025-10-09 10:36:26 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:26.064847 | orchestrator | 2025-10-09 10:36:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:29.099655 | orchestrator | 2025-10-09 10:36:29 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:29.100013 | orchestrator | 2025-10-09 10:36:29 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:29.101652 | orchestrator | 2025-10-09 10:36:29 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:29.102982 | orchestrator | 2025-10-09 10:36:29 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:29.103352 | orchestrator | 2025-10-09 10:36:29 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:32.151979 | orchestrator | 2025-10-09 10:36:32 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:32.152750 | orchestrator | 2025-10-09 10:36:32 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:32.153973 | orchestrator | 2025-10-09 10:36:32 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:32.157431 | orchestrator | 2025-10-09 10:36:32 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:32.157886 | orchestrator | 2025-10-09 10:36:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:35.202270 | orchestrator | 2025-10-09 10:36:35 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:35.202948 | orchestrator | 2025-10-09 10:36:35 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:35.204254 | orchestrator | 2025-10-09 10:36:35 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:35.205450 | orchestrator | 2025-10-09 
10:36:35 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:35.205698 | orchestrator | 2025-10-09 10:36:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:38.245913 | orchestrator | 2025-10-09 10:36:38 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:38.246504 | orchestrator | 2025-10-09 10:36:38 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:38.247986 | orchestrator | 2025-10-09 10:36:38 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:38.249389 | orchestrator | 2025-10-09 10:36:38 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:38.249471 | orchestrator | 2025-10-09 10:36:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:41.298616 | orchestrator | 2025-10-09 10:36:41 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:41.298813 | orchestrator | 2025-10-09 10:36:41 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:41.299911 | orchestrator | 2025-10-09 10:36:41 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:41.300898 | orchestrator | 2025-10-09 10:36:41 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:41.301023 | orchestrator | 2025-10-09 10:36:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:44.342709 | orchestrator | 2025-10-09 10:36:44 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:44.343054 | orchestrator | 2025-10-09 10:36:44 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:44.344271 | orchestrator | 2025-10-09 10:36:44 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:44.346473 | orchestrator | 2025-10-09 10:36:44 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:44.346522 | orchestrator | 2025-10-09 10:36:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:47.381175 | orchestrator | 2025-10-09 10:36:47 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:47.382361 | orchestrator | 2025-10-09 10:36:47 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:47.384860 | orchestrator | 2025-10-09 10:36:47 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:47.385936 | orchestrator | 2025-10-09 10:36:47 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:47.386864 | orchestrator | 2025-10-09 10:36:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:50.426242 | orchestrator | 2025-10-09 10:36:50 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:50.426782 | orchestrator | 2025-10-09 10:36:50 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:50.430150 | orchestrator | 2025-10-09 10:36:50 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:50.431219 | orchestrator | 2025-10-09 10:36:50 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:50.431367 | orchestrator | 2025-10-09 10:36:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:53.470397 | orchestrator | 2025-10-09 10:36:53 | INFO  | Task 
822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:53.472106 | orchestrator | 2025-10-09 10:36:53 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:53.473177 | orchestrator | 2025-10-09 10:36:53 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:53.474618 | orchestrator | 2025-10-09 10:36:53 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:53.474770 | orchestrator | 2025-10-09 10:36:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:56.506517 | orchestrator | 2025-10-09 10:36:56 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:56.506968 | orchestrator | 2025-10-09 10:36:56 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:56.507967 | orchestrator | 2025-10-09 10:36:56 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:56.509115 | orchestrator | 2025-10-09 10:36:56 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:56.509393 | orchestrator | 2025-10-09 10:36:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:59.544957 | orchestrator | 2025-10-09 10:36:59 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:36:59.545601 | orchestrator | 2025-10-09 10:36:59 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:36:59.546604 | orchestrator | 2025-10-09 10:36:59 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:36:59.547805 | orchestrator | 2025-10-09 10:36:59 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:36:59.547828 | orchestrator | 2025-10-09 10:36:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:02.577791 | orchestrator | 2025-10-09 10:37:02 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:02.577917 | orchestrator | 2025-10-09 10:37:02 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:02.581104 | orchestrator | 2025-10-09 10:37:02 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:02.581148 | orchestrator | 2025-10-09 10:37:02 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:02.581162 | orchestrator | 2025-10-09 10:37:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:05.618613 | orchestrator | 2025-10-09 10:37:05 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:05.619153 | orchestrator | 2025-10-09 10:37:05 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:05.620988 | orchestrator | 2025-10-09 10:37:05 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:05.622234 | orchestrator | 2025-10-09 10:37:05 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:05.622306 | orchestrator | 2025-10-09 10:37:05 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:08.655602 | orchestrator | 2025-10-09 10:37:08 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:08.656087 | orchestrator | 2025-10-09 10:37:08 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:08.657127 | orchestrator | 2025-10-09 10:37:08 | INFO  | Task 
27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:08.658166 | orchestrator | 2025-10-09 10:37:08 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:08.658390 | orchestrator | 2025-10-09 10:37:08 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:11.706339 | orchestrator | 2025-10-09 10:37:11 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:11.707249 | orchestrator | 2025-10-09 10:37:11 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:11.707914 | orchestrator | 2025-10-09 10:37:11 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:11.712731 | orchestrator | 2025-10-09 10:37:11 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:11.713048 | orchestrator | 2025-10-09 10:37:11 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:14.745387 | orchestrator | 2025-10-09 10:37:14 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:14.746823 | orchestrator | 2025-10-09 10:37:14 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:14.749340 | orchestrator | 2025-10-09 10:37:14 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:14.750484 | orchestrator | 2025-10-09 10:37:14 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:14.750519 | orchestrator | 2025-10-09 10:37:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:17.793247 | orchestrator | 2025-10-09 10:37:17 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:17.794691 | orchestrator | 2025-10-09 10:37:17 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:17.797333 | orchestrator | 2025-10-09 10:37:17 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:17.799216 | orchestrator | 2025-10-09 10:37:17 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:17.799242 | orchestrator | 2025-10-09 10:37:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:20.854964 | orchestrator | 2025-10-09 10:37:20 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:20.857021 | orchestrator | 2025-10-09 10:37:20 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:20.858456 | orchestrator | 2025-10-09 10:37:20 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:20.860038 | orchestrator | 2025-10-09 10:37:20 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:20.860069 | orchestrator | 2025-10-09 10:37:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:23.920380 | orchestrator | 2025-10-09 10:37:23 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:23.922336 | orchestrator | 2025-10-09 10:37:23 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:23.924501 | orchestrator | 2025-10-09 10:37:23 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:23.926088 | orchestrator | 2025-10-09 10:37:23 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:23.926115 | orchestrator | 2025-10-09 10:37:23 | INFO  | Wait 1 
second(s) until the next check 2025-10-09 10:37:26.971654 | orchestrator | 2025-10-09 10:37:26 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:26.972797 | orchestrator | 2025-10-09 10:37:26 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:26.975075 | orchestrator | 2025-10-09 10:37:26 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:26.976414 | orchestrator | 2025-10-09 10:37:26 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:26.976708 | orchestrator | 2025-10-09 10:37:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:30.010118 | orchestrator | 2025-10-09 10:37:30 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:30.021683 | orchestrator | 2025-10-09 10:37:30 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:30.021724 | orchestrator | 2025-10-09 10:37:30 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:30.021737 | orchestrator | 2025-10-09 10:37:30 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:30.021749 | orchestrator | 2025-10-09 10:37:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:33.120940 | orchestrator | 2025-10-09 10:37:33 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:33.122729 | orchestrator | 2025-10-09 10:37:33 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:33.123734 | orchestrator | 2025-10-09 10:37:33 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:33.125242 | orchestrator | 2025-10-09 10:37:33 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:33.125264 | orchestrator | 2025-10-09 10:37:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:36.156114 | orchestrator | 2025-10-09 10:37:36 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:36.157299 | orchestrator | 2025-10-09 10:37:36 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:36.160008 | orchestrator | 2025-10-09 10:37:36 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:36.160890 | orchestrator | 2025-10-09 10:37:36 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:36.160994 | orchestrator | 2025-10-09 10:37:36 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:39.229686 | orchestrator | 2025-10-09 10:37:39 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:39.230648 | orchestrator | 2025-10-09 10:37:39 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:39.232122 | orchestrator | 2025-10-09 10:37:39 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:39.233703 | orchestrator | 2025-10-09 10:37:39 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:39.233727 | orchestrator | 2025-10-09 10:37:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:42.271732 | orchestrator | 2025-10-09 10:37:42 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:42.274436 | orchestrator | 2025-10-09 10:37:42 | INFO  | Task 
6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:42.276125 | orchestrator | 2025-10-09 10:37:42 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:42.277508 | orchestrator | 2025-10-09 10:37:42 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:42.277532 | orchestrator | 2025-10-09 10:37:42 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:45.326864 | orchestrator | 2025-10-09 10:37:45 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:45.329816 | orchestrator | 2025-10-09 10:37:45 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:45.330893 | orchestrator | 2025-10-09 10:37:45 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:45.332481 | orchestrator | 2025-10-09 10:37:45 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:45.332501 | orchestrator | 2025-10-09 10:37:45 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:48.390593 | orchestrator | 2025-10-09 10:37:48 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:48.392445 | orchestrator | 2025-10-09 10:37:48 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:48.394068 | orchestrator | 2025-10-09 10:37:48 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:48.397629 | orchestrator | 2025-10-09 10:37:48 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:48.397709 | orchestrator | 2025-10-09 10:37:48 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:51.438438 | orchestrator | 2025-10-09 10:37:51 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:51.439651 | orchestrator | 2025-10-09 10:37:51 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:51.440568 | orchestrator | 2025-10-09 10:37:51 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:51.441365 | orchestrator | 2025-10-09 10:37:51 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:51.441389 | orchestrator | 2025-10-09 10:37:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:54.477831 | orchestrator | 2025-10-09 10:37:54 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:54.478416 | orchestrator | 2025-10-09 10:37:54 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:54.478932 | orchestrator | 2025-10-09 10:37:54 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:54.479718 | orchestrator | 2025-10-09 10:37:54 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:54.479740 | orchestrator | 2025-10-09 10:37:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:57.512018 | orchestrator | 2025-10-09 10:37:57 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:37:57.512835 | orchestrator | 2025-10-09 10:37:57 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:37:57.513764 | orchestrator | 2025-10-09 10:37:57 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:37:57.518264 | orchestrator | 2025-10-09 10:37:57 | INFO  | Task 
1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:37:57.518580 | orchestrator | 2025-10-09 10:37:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:00.555816 | orchestrator | 2025-10-09 10:38:00 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:00.556389 | orchestrator | 2025-10-09 10:38:00 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:00.557237 | orchestrator | 2025-10-09 10:38:00 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:00.558449 | orchestrator | 2025-10-09 10:38:00 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:00.559133 | orchestrator | 2025-10-09 10:38:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:03.604011 | orchestrator | 2025-10-09 10:38:03 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:03.605338 | orchestrator | 2025-10-09 10:38:03 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:03.605957 | orchestrator | 2025-10-09 10:38:03 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:03.606724 | orchestrator | 2025-10-09 10:38:03 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:03.606762 | orchestrator | 2025-10-09 10:38:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:06.642944 | orchestrator | 2025-10-09 10:38:06 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:06.643214 | orchestrator | 2025-10-09 10:38:06 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:06.643902 | orchestrator | 2025-10-09 10:38:06 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:06.644626 | orchestrator | 2025-10-09 10:38:06 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:06.644723 | orchestrator | 2025-10-09 10:38:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:09.685381 | orchestrator | 2025-10-09 10:38:09 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:09.685706 | orchestrator | 2025-10-09 10:38:09 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:09.688252 | orchestrator | 2025-10-09 10:38:09 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:09.690883 | orchestrator | 2025-10-09 10:38:09 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:09.690906 | orchestrator | 2025-10-09 10:38:09 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:12.759690 | orchestrator | 2025-10-09 10:38:12 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:12.760004 | orchestrator | 2025-10-09 10:38:12 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:12.765205 | orchestrator | 2025-10-09 10:38:12 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:12.770384 | orchestrator | 2025-10-09 10:38:12 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:12.770423 | orchestrator | 2025-10-09 10:38:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:15.810680 | orchestrator | 2025-10-09 10:38:15 | INFO  | Task 
822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:15.811621 | orchestrator | 2025-10-09 10:38:15 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:15.812889 | orchestrator | 2025-10-09 10:38:15 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:15.814292 | orchestrator | 2025-10-09 10:38:15 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:15.814317 | orchestrator | 2025-10-09 10:38:15 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:18.862244 | orchestrator | 2025-10-09 10:38:18 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:18.862469 | orchestrator | 2025-10-09 10:38:18 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:18.863376 | orchestrator | 2025-10-09 10:38:18 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:18.865642 | orchestrator | 2025-10-09 10:38:18 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:18.865665 | orchestrator | 2025-10-09 10:38:18 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:21.901244 | orchestrator | 2025-10-09 10:38:21 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:21.901893 | orchestrator | 2025-10-09 10:38:21 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:21.903265 | orchestrator | 2025-10-09 10:38:21 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:21.904675 | orchestrator | 2025-10-09 10:38:21 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:21.904697 | orchestrator | 2025-10-09 10:38:21 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:24.952257 | orchestrator | 2025-10-09 10:38:24 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:24.953445 | orchestrator | 2025-10-09 10:38:24 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:24.954376 | orchestrator | 2025-10-09 10:38:24 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:24.955353 | orchestrator | 2025-10-09 10:38:24 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:24.955684 | orchestrator | 2025-10-09 10:38:24 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:28.013921 | orchestrator | 2025-10-09 10:38:28 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:28.014945 | orchestrator | 2025-10-09 10:38:28 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:28.016957 | orchestrator | 2025-10-09 10:38:28 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:28.018574 | orchestrator | 2025-10-09 10:38:28 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:28.018804 | orchestrator | 2025-10-09 10:38:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:31.051533 | orchestrator | 2025-10-09 10:38:31 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:31.051800 | orchestrator | 2025-10-09 10:38:31 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:31.052875 | orchestrator | 2025-10-09 10:38:31 | INFO  | Task 
27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:31.053659 | orchestrator | 2025-10-09 10:38:31 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:31.053682 | orchestrator | 2025-10-09 10:38:31 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:34.101500 | orchestrator | 2025-10-09 10:38:34 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state STARTED 2025-10-09 10:38:34.103507 | orchestrator | 2025-10-09 10:38:34 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:34.105707 | orchestrator | 2025-10-09 10:38:34 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:34.107557 | orchestrator | 2025-10-09 10:38:34 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:34.107592 | orchestrator | 2025-10-09 10:38:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:37.154737 | orchestrator | 2025-10-09 10:38:37 | INFO  | Task 822c24d3-91b3-47e8-b366-95813a9cb7af is in state SUCCESS 2025-10-09 10:38:37.156829 | orchestrator | 2025-10-09 10:38:37.156871 | orchestrator | 2025-10-09 10:38:37.156883 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:38:37.156893 | orchestrator | 2025-10-09 10:38:37.156903 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:38:37.156914 | orchestrator | Thursday 09 October 2025 10:35:11 +0000 (0:00:00.455) 0:00:00.455 ****** 2025-10-09 10:38:37.156924 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:38:37.156935 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:38:37.156945 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:38:37.156955 | orchestrator | 2025-10-09 10:38:37.156965 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:38:37.156974 | orchestrator | Thursday 09 October 2025 10:35:12 +0000 (0:00:00.571) 0:00:01.026 ****** 2025-10-09 10:38:37.156984 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-10-09 10:38:37.156994 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-10-09 10:38:37.157004 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-10-09 10:38:37.157013 | orchestrator | 2025-10-09 10:38:37.157023 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-10-09 10:38:37.157035 | orchestrator | 2025-10-09 10:38:37.157051 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-09 10:38:37.157067 | orchestrator | Thursday 09 October 2025 10:35:13 +0000 (0:00:00.831) 0:00:01.858 ****** 2025-10-09 10:38:37.157082 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:38:37.157100 | orchestrator | 2025-10-09 10:38:37.157115 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-10-09 10:38:37.157167 | orchestrator | Thursday 09 October 2025 10:35:14 +0000 (0:00:01.326) 0:00:03.184 ****** 2025-10-09 10:38:37.157244 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-10-09 10:38:37.157255 | orchestrator | 2025-10-09 10:38:37.157265 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-10-09 10:38:37.157274 | orchestrator | 
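
The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a wait loop that polls the queued deployment tasks until each one reaches a final state such as SUCCESS. A minimal sketch of that pattern follows; it is illustrative only, and get_task_state() is a hypothetical helper, not the actual client API:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task states and print progress until every task is finished."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # hypothetical lookup against the task backend
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

In the log the checks land roughly three seconds apart despite the one-second wait message, presumably because querying the state of all four tasks adds its own round-trip time on top of the sleep.
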
Thursday 09 October 2025 10:35:18 +0000 (0:00:04.158) 0:00:07.343 ****** 2025-10-09 10:38:37.157284 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-10-09 10:38:37.157387 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-10-09 10:38:37.157398 | orchestrator | 2025-10-09 10:38:37.157408 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-10-09 10:38:37.157419 | orchestrator | Thursday 09 October 2025 10:35:26 +0000 (0:00:07.453) 0:00:14.796 ****** 2025-10-09 10:38:37.157434 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:38:37.157453 | orchestrator | 2025-10-09 10:38:37.157496 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-10-09 10:38:37.157511 | orchestrator | Thursday 09 October 2025 10:35:29 +0000 (0:00:03.028) 0:00:17.824 ****** 2025-10-09 10:38:37.157530 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:38:37.157548 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-10-09 10:38:37.157565 | orchestrator | 2025-10-09 10:38:37.157582 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-10-09 10:38:37.157599 | orchestrator | Thursday 09 October 2025 10:35:32 +0000 (0:00:03.585) 0:00:21.410 ****** 2025-10-09 10:38:37.157610 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:38:37.157619 | orchestrator | 2025-10-09 10:38:37.157629 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-10-09 10:38:37.157639 | orchestrator | Thursday 09 October 2025 10:35:36 +0000 (0:00:03.546) 0:00:24.956 ****** 2025-10-09 10:38:37.157683 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-10-09 10:38:37.157695 | orchestrator | 2025-10-09 10:38:37.157704 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-10-09 10:38:37.157714 | orchestrator | Thursday 09 October 2025 10:35:41 +0000 (0:00:04.637) 0:00:29.593 ****** 2025-10-09 10:38:37.157748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.157776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.157794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.157805 | orchestrator | 2025-10-09 10:38:37.157815 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-09 10:38:37.157825 | orchestrator | Thursday 09 October 2025 10:35:47 +0000 (0:00:06.111) 0:00:35.704 ****** 2025-10-09 10:38:37.157840 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:38:37.157850 | orchestrator | 2025-10-09 10:38:37.157861 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-10-09 10:38:37.157890 | orchestrator | Thursday 09 October 2025 10:35:47 +0000 (0:00:00.611) 0:00:36.316 ****** 2025-10-09 10:38:37.157907 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:37.157923 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:38:37.157937 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:38:37.157947 | orchestrator | 2025-10-09 10:38:37.157956 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-10-09 10:38:37.157966 | orchestrator | Thursday 09 October 2025 10:35:51 +0000 (0:00:03.756) 0:00:40.072 ****** 2025-10-09 10:38:37.157976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:38:37.157988 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:38:37.158004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:38:37.158090 | orchestrator | 2025-10-09 10:38:37.158105 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-10-09 10:38:37.158115 | orchestrator | Thursday 09 October 2025 10:35:53 +0000 (0:00:01.546) 0:00:41.619 ****** 2025-10-09 10:38:37.158124 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:38:37.158133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:38:37.158143 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:38:37.158152 | orchestrator | 2025-10-09 10:38:37.158162 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-10-09 10:38:37.158195 | orchestrator | Thursday 09 October 2025 10:35:54 +0000 (0:00:01.168) 0:00:42.787 ****** 2025-10-09 10:38:37.158207 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:38:37.158216 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:38:37.158225 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:38:37.158235 | orchestrator | 2025-10-09 10:38:37.158245 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-10-09 10:38:37.158254 | orchestrator | Thursday 09 October 2025 10:35:54 +0000 
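
Earlier in this play the service-ks-register tasks registered the Image service in Keystone: a "glance" service of type image, internal and public endpoints on port 9292, the service project, a glance service user, and an admin role grant. A rough openstacksdk equivalent is sketched below; this is illustrative only and not the Kolla Ansible implementation. Names and URLs are taken from the log, while the cloud name and password are placeholders:

    import openstack

    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

    # Service and its internal/public endpoints (glance / image)
    service = conn.identity.create_service(name="glance", type="image")
    for interface, url in {
        "internal": "https://api-int.testbed.osism.xyz:9292",
        "public": "https://api.testbed.osism.xyz:9292",
    }.items():
        conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

    # Service project, service user, and the admin role grant (glance -> service -> admin)
    project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
    user = conn.identity.create_user(name="glance", password="...", default_project_id=project.id)
    admin_role = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, admin_role)
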
(0:00:00.630) 0:00:43.418 ****** 2025-10-09 10:38:37.158263 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.158273 | orchestrator | 2025-10-09 10:38:37.158282 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-10-09 10:38:37.158292 | orchestrator | Thursday 09 October 2025 10:35:55 +0000 (0:00:00.415) 0:00:43.833 ****** 2025-10-09 10:38:37.158301 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.158310 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.158320 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.158329 | orchestrator | 2025-10-09 10:38:37.158339 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-09 10:38:37.158348 | orchestrator | Thursday 09 October 2025 10:35:55 +0000 (0:00:00.309) 0:00:44.143 ****** 2025-10-09 10:38:37.158357 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:38:37.158367 | orchestrator | 2025-10-09 10:38:37.158376 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-10-09 10:38:37.158386 | orchestrator | Thursday 09 October 2025 10:35:56 +0000 (0:00:00.560) 0:00:44.704 ****** 2025-10-09 10:38:37.158412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.158433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.158449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.158465 | orchestrator | 2025-10-09 10:38:37.158476 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-10-09 10:38:37.158485 | orchestrator | Thursday 09 October 2025 10:36:01 +0000 (0:00:05.108) 0:00:49.812 ****** 2025-10-09 10:38:37.158504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:38:37.158516 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.158531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:38:37.158547 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.158565 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:38:37.158575 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.158585 | orchestrator | 2025-10-09 10:38:37.158595 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-10-09 10:38:37.158604 | orchestrator | Thursday 09 October 2025 10:36:08 +0000 (0:00:06.791) 0:00:56.603 ****** 2025-10-09 10:38:37.158619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:38:37.158635 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.158651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:38:37.158662 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.158672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:38:37.158682 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.158692 | orchestrator | 2025-10-09 10:38:37.158702 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-10-09 10:38:37.158711 | orchestrator | Thursday 09 October 2025 10:36:14 +0000 (0:00:06.228) 0:01:02.832 ****** 2025-10-09 10:38:37.158726 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.158736 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.158745 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.158755 | orchestrator | 2025-10-09 10:38:37.158775 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-10-09 10:38:37.158784 | orchestrator | Thursday 09 October 2025 10:36:19 +0000 (0:00:04.898) 0:01:07.730 ****** 2025-10-09 10:38:37.158801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.158813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.158828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.158844 | orchestrator | 2025-10-09 10:38:37.158854 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-10-09 10:38:37.158864 | orchestrator | Thursday 09 October 2025 10:36:24 +0000 (0:00:05.544) 0:01:13.275 ****** 2025-10-09 10:38:37.158873 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:37.158883 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:38:37.158892 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:38:37.158902 | orchestrator | 2025-10-09 
10:38:37.158911 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-10-09 10:38:37.158925 | orchestrator | Thursday 09 October 2025 10:36:34 +0000 (0:00:09.331) 0:01:22.606 ****** 2025-10-09 10:38:37.158935 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.158945 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.158954 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.158963 | orchestrator | 2025-10-09 10:38:37.158973 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-10-09 10:38:37.158982 | orchestrator | Thursday 09 October 2025 10:36:40 +0000 (0:00:06.524) 0:01:29.131 ****** 2025-10-09 10:38:37.158992 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.159001 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.159011 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.159020 | orchestrator | 2025-10-09 10:38:37.159030 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-10-09 10:38:37.159040 | orchestrator | Thursday 09 October 2025 10:36:51 +0000 (0:00:11.167) 0:01:40.298 ****** 2025-10-09 10:38:37.159049 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.159058 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.159068 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.159077 | orchestrator | 2025-10-09 10:38:37.159087 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-10-09 10:38:37.159096 | orchestrator | Thursday 09 October 2025 10:36:58 +0000 (0:00:06.770) 0:01:47.068 ****** 2025-10-09 10:38:37.159106 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.159115 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.159125 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.159134 | orchestrator | 2025-10-09 10:38:37.159144 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-10-09 10:38:37.159153 | orchestrator | Thursday 09 October 2025 10:37:05 +0000 (0:00:07.370) 0:01:54.438 ****** 2025-10-09 10:38:37.159168 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.159198 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.159208 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.159217 | orchestrator | 2025-10-09 10:38:37.159227 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-10-09 10:38:37.159236 | orchestrator | Thursday 09 October 2025 10:37:06 +0000 (0:00:00.325) 0:01:54.764 ****** 2025-10-09 10:38:37.159246 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-10-09 10:38:37.159255 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.159265 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-10-09 10:38:37.159275 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.159284 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-10-09 10:38:37.159293 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.159303 | orchestrator | 2025-10-09 10:38:37.159312 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-10-09 10:38:37.159322 | 
orchestrator | Thursday 09 October 2025 10:37:10 +0000 (0:00:03.978) 0:01:58.742 ****** 2025-10-09 10:38:37.159336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.159356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.159377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:38:37.159388 | orchestrator | 2025-10-09 10:38:37.159398 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-09 10:38:37.159407 | orchestrator | Thursday 09 October 2025 10:37:14 +0000 (0:00:04.415) 0:02:03.158 ****** 2025-10-09 10:38:37.159417 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:38:37.159426 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:38:37.159435 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:38:37.159445 | orchestrator | 2025-10-09 10:38:37.159454 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-10-09 10:38:37.159464 | orchestrator | Thursday 09 October 2025 10:37:14 +0000 (0:00:00.287) 0:02:03.445 ****** 2025-10-09 10:38:37.159473 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:37.159483 | orchestrator | 2025-10-09 10:38:37.159492 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-10-09 10:38:37.159501 | orchestrator | Thursday 09 October 2025 10:37:17 +0000 (0:00:02.217) 0:02:05.663 ****** 2025-10-09 10:38:37.159511 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:37.159520 | orchestrator | 2025-10-09 10:38:37.159530 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-10-09 10:38:37.159539 | orchestrator | Thursday 09 October 2025 10:37:19 +0000 (0:00:02.346) 0:02:08.009 ****** 2025-10-09 10:38:37.159549 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:37.159558 | orchestrator | 2025-10-09 10:38:37.159567 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 
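
The Enable/Disable log_bin_trust_function_creators tasks that bracket the Glance bootstrap exist because the schema migration can create stored functions and triggers, which MariaDB typically refuses from a non-SUPER account while binary logging is enabled unless that flag is set. A condensed sketch of the sequence follows, assuming PyMySQL and placeholder connection details; Kolla actually runs the migration inside a one-off bootstrap container rather than on the deployment host:

    import subprocess
    import pymysql

    # Placeholder credentials/host for illustration only.
    conn = pymysql.connect(host="db.example.org", user="root", password="...")
    with conn.cursor() as cur:
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
    try:
        # Inside the bootstrap container this is roughly `glance-manage db sync`.
        subprocess.run(["glance-manage", "db", "sync"], check=True)
    finally:
        with conn.cursor() as cur:
            cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
        conn.close()
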
2025-10-09 10:38:37.159582 | orchestrator | Thursday 09 October 2025 10:37:21 +0000 (0:00:02.132) 0:02:10.142 ****** 2025-10-09 10:38:37.159597 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:37.159607 | orchestrator | 2025-10-09 10:38:37.159616 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-10-09 10:38:37.159626 | orchestrator | Thursday 09 October 2025 10:37:48 +0000 (0:00:27.226) 0:02:37.368 ****** 2025-10-09 10:38:37.159635 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:37.159645 | orchestrator | 2025-10-09 10:38:37.159654 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-09 10:38:37.159664 | orchestrator | Thursday 09 October 2025 10:37:51 +0000 (0:00:02.362) 0:02:39.731 ****** 2025-10-09 10:38:37.159673 | orchestrator | 2025-10-09 10:38:37.159683 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-09 10:38:37.159692 | orchestrator | Thursday 09 October 2025 10:37:51 +0000 (0:00:00.140) 0:02:39.872 ****** 2025-10-09 10:38:37.159701 | orchestrator | 2025-10-09 10:38:37.159711 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-09 10:38:37.159720 | orchestrator | Thursday 09 October 2025 10:37:51 +0000 (0:00:00.098) 0:02:39.970 ****** 2025-10-09 10:38:37.159730 | orchestrator | 2025-10-09 10:38:37.159739 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-10-09 10:38:37.159749 | orchestrator | Thursday 09 October 2025 10:37:51 +0000 (0:00:00.143) 0:02:40.114 ****** 2025-10-09 10:38:37.159758 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:37.159768 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:38:37.159777 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:38:37.159787 | orchestrator | 2025-10-09 10:38:37.159796 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:38:37.159807 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-10-09 10:38:37.159818 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-10-09 10:38:37.159827 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-10-09 10:38:37.159837 | orchestrator | 2025-10-09 10:38:37.159846 | orchestrator | 2025-10-09 10:38:37.159856 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:38:37.159865 | orchestrator | Thursday 09 October 2025 10:38:34 +0000 (0:00:42.955) 0:03:23.069 ****** 2025-10-09 10:38:37.159875 | orchestrator | =============================================================================== 2025-10-09 10:38:37.159884 | orchestrator | glance : Restart glance-api container ---------------------------------- 42.96s 2025-10-09 10:38:37.159894 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.23s 2025-10-09 10:38:37.159903 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 11.17s 2025-10-09 10:38:37.159913 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.33s 2025-10-09 10:38:37.159922 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.45s 2025-10-09 
10:38:37.159932 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 7.37s 2025-10-09 10:38:37.159941 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.79s 2025-10-09 10:38:37.159951 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.77s 2025-10-09 10:38:37.159965 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.52s 2025-10-09 10:38:37.159975 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.23s 2025-10-09 10:38:37.159984 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.11s 2025-10-09 10:38:37.159994 | orchestrator | glance : Copying over config.json files for services -------------------- 5.54s 2025-10-09 10:38:37.160008 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.11s 2025-10-09 10:38:37.160018 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.90s 2025-10-09 10:38:37.160027 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.64s 2025-10-09 10:38:37.160037 | orchestrator | glance : Check glance containers ---------------------------------------- 4.42s 2025-10-09 10:38:37.160046 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.16s 2025-10-09 10:38:37.160055 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.98s 2025-10-09 10:38:37.160065 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.76s 2025-10-09 10:38:37.160074 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.59s 2025-10-09 10:38:37.160084 | orchestrator | 2025-10-09 10:38:37 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:38:37.160511 | orchestrator | 2025-10-09 10:38:37 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:37.162606 | orchestrator | 2025-10-09 10:38:37 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:37.164561 | orchestrator | 2025-10-09 10:38:37 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:37.164594 | orchestrator | 2025-10-09 10:38:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:40.216165 | orchestrator | 2025-10-09 10:38:40 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:38:40.216282 | orchestrator | 2025-10-09 10:38:40 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:40.216703 | orchestrator | 2025-10-09 10:38:40 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:40.217554 | orchestrator | 2025-10-09 10:38:40 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:40.217718 | orchestrator | 2025-10-09 10:38:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:43.253963 | orchestrator | 2025-10-09 10:38:43 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:38:43.257696 | orchestrator | 2025-10-09 10:38:43 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:43.259158 | orchestrator | 2025-10-09 10:38:43 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae 
is in state STARTED 2025-10-09 10:38:43.260686 | orchestrator | 2025-10-09 10:38:43 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:43.261339 | orchestrator | 2025-10-09 10:38:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:46.292736 | orchestrator | 2025-10-09 10:38:46 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:38:46.294479 | orchestrator | 2025-10-09 10:38:46 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:46.296402 | orchestrator | 2025-10-09 10:38:46 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:46.297858 | orchestrator | 2025-10-09 10:38:46 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:46.298297 | orchestrator | 2025-10-09 10:38:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:49.336681 | orchestrator | 2025-10-09 10:38:49 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:38:49.338646 | orchestrator | 2025-10-09 10:38:49 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:49.342417 | orchestrator | 2025-10-09 10:38:49 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:49.345157 | orchestrator | 2025-10-09 10:38:49 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:49.345168 | orchestrator | 2025-10-09 10:38:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:52.392712 | orchestrator | 2025-10-09 10:38:52 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:38:52.393968 | orchestrator | 2025-10-09 10:38:52 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:52.395814 | orchestrator | 2025-10-09 10:38:52 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:52.397126 | orchestrator | 2025-10-09 10:38:52 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:52.397156 | orchestrator | 2025-10-09 10:38:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:55.449011 | orchestrator | 2025-10-09 10:38:55 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:38:55.450600 | orchestrator | 2025-10-09 10:38:55 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:55.453085 | orchestrator | 2025-10-09 10:38:55 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:55.455245 | orchestrator | 2025-10-09 10:38:55 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:55.455546 | orchestrator | 2025-10-09 10:38:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:58.500438 | orchestrator | 2025-10-09 10:38:58 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:38:58.500759 | orchestrator | 2025-10-09 10:38:58 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:38:58.501879 | orchestrator | 2025-10-09 10:38:58 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:38:58.502967 | orchestrator | 2025-10-09 10:38:58 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state STARTED 2025-10-09 10:38:58.503271 | orchestrator | 2025-10-09 10:38:58 | INFO  | Wait 1 second(s) until the next check 
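The entries above show the deploy waiting on four OSISM task IDs, re-checking each one on every pass and printing "Wait 1 second(s) until the next check" between passes until a task flips to SUCCESS (as happens for one of them at 10:39:01 just below). As an illustration only, a minimal polling loop in this style could look like the following sketch; wait_for_tasks and get_task_state are hypothetical names for this example, not the actual OSISM client API.

import time

def wait_for_tasks(get_task_state, task_ids, poll_interval=1.0):
    # Poll every task until it reports SUCCESS. get_task_state is a
    # caller-supplied callable (hypothetical) that returns a state string
    # such as "STARTED" or "SUCCESS" for a given task id.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(poll_interval)} second(s) until the next check")
            time.sleep(poll_interval)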
2025-10-09 10:39:01.551556 | orchestrator | 2025-10-09 10:39:01 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:01.555273 | orchestrator | 2025-10-09 10:39:01 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:01.557683 | orchestrator | 2025-10-09 10:39:01 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:01.563985 | orchestrator | 2025-10-09 10:39:01 | INFO  | Task 1d3859e8-8fd8-4c37-a20a-1fa2d2f7f140 is in state SUCCESS 2025-10-09 10:39:01.564825 | orchestrator | 2025-10-09 10:39:01 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:01.566918 | orchestrator | 2025-10-09 10:39:01.566952 | orchestrator | 2025-10-09 10:39:01.566964 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:39:01.566975 | orchestrator | 2025-10-09 10:39:01.566986 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:39:01.566998 | orchestrator | Thursday 09 October 2025 10:35:02 +0000 (0:00:00.336) 0:00:00.336 ****** 2025-10-09 10:39:01.567009 | orchestrator | ok: [testbed-manager] 2025-10-09 10:39:01.567021 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:39:01.567032 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:39:01.567043 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:39:01.567054 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:39:01.567064 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:39:01.567075 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:39:01.567145 | orchestrator | 2025-10-09 10:39:01.567159 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:39:01.567192 | orchestrator | Thursday 09 October 2025 10:35:03 +0000 (0:00:00.940) 0:00:01.277 ****** 2025-10-09 10:39:01.567205 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-10-09 10:39:01.567216 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-10-09 10:39:01.567227 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-10-09 10:39:01.567238 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-10-09 10:39:01.567248 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-10-09 10:39:01.567259 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-10-09 10:39:01.567270 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-10-09 10:39:01.567282 | orchestrator | 2025-10-09 10:39:01.567293 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-10-09 10:39:01.567303 | orchestrator | 2025-10-09 10:39:01.567314 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-10-09 10:39:01.567325 | orchestrator | Thursday 09 October 2025 10:35:04 +0000 (0:00:00.760) 0:00:02.038 ****** 2025-10-09 10:39:01.567338 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:39:01.567487 | orchestrator | 2025-10-09 10:39:01.567499 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-10-09 10:39:01.567512 | orchestrator | Thursday 09 October 2025 10:35:05 +0000 (0:00:01.796) 0:00:03.834 ****** 2025-10-09 
10:39:01.567570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.567592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.567606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.567621 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:39:01.567665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.567682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-10-09 10:39:01.567696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.567710 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.567729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.567743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.567757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.567771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.567800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.567814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.567829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.567864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.567879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.567891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.567903 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.567929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.567942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.567955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.567978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.567996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568098 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:39:01.568132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568145 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568156 | orchestrator | 2025-10-09 10:39:01.568168 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-10-09 10:39:01.568206 | orchestrator | Thursday 09 October 2025 10:35:09 +0000 (0:00:03.988) 0:00:07.823 ****** 2025-10-09 10:39:01.568218 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:39:01.568229 | orchestrator | 2025-10-09 10:39:01.568240 | 
orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-10-09 10:39:01.568251 | orchestrator | Thursday 09 October 2025 10:35:11 +0000 (0:00:01.802) 0:00:09.625 ****** 2025-10-09 10:39:01.568268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.568282 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:39:01.568301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.568319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.568331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.568343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.568354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.568382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568400 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.568411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568452 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2025-10-09 10:39:01.568522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.568539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568552 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:39:01.568564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.568609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.569258 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.569281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.569293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.569305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.569316 | orchestrator | 2025-10-09 10:39:01.569328 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-10-09 10:39:01.569340 | orchestrator | Thursday 09 October 2025 10:35:18 +0000 (0:00:06.762) 0:00:16.388 ****** 2025-10-09 10:39:01.569359 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.569403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.569501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569515 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:39:01.569527 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.569559 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.569582 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:39:01.569596 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.569680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.569729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.569754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569767 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.569780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.569810 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:39:01.569822 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.569836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.569850 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.569863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.569892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.569907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.569921 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.569934 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.569948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.569967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.569980 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.569994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.570007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570093 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.570106 | orchestrator | 
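Each loop item in the task output above is a kolla-ansible service definition rendered as a Python dict: container name, inventory group, image reference, bind-mounted volumes, and (for frontend services) an haproxy block. The condition behind the per-item "skipping" results is not printed in the log; the following is a minimal, hypothetical Python sketch of that selection logic, assuming a service is only handled when it is enabled and the target host is a member of the service's group.

# Minimal sketch (not the kolla-ansible implementation): decide which service
# definitions a host would act on, mirroring the per-item skips shown above.
# The dict shape matches the loop items printed in the log; the selection
# conditions and the host-to-group mapping are assumptions for illustration.

services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928",
        "volumes": ["/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro"],
        "dimensions": {},
    },
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "group": "prometheus-cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928",
        "volumes": ["/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro"],
        "dimensions": {},
    },
}

# Hypothetical group membership for a compute node such as testbed-node-3.
host_groups = {"prometheus-node-exporter", "prometheus-cadvisor", "prometheus-libvirt-exporter"}


def selected_services(services, host_groups):
    """Yield (name, definition) pairs the host would handle; all other items are skipped."""
    for name, svc in services.items():
        if svc.get("enabled") and svc.get("group") in host_groups:
            yield name, svc


for name, svc in selected_services(services, host_groups):
    print(f"would template config for {name} -> {svc['container_name']}")

In this run the entire backend TLS certificate/key copy task is skipped on every host, which is consistent with backend TLS being disabled for the testbed deployment, although the log itself does not show the conditional.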
2025-10-09 10:39:01.570124 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-10-09 10:39:01.570139 | orchestrator | Thursday 09 October 2025 10:35:20 +0000 (0:00:01.821) 0:00:18.209 ****** 2025-10-09 10:39:01.570153 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:39:01.570233 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.570250 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:39:01.570292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570303 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:39:01.570315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.570331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570384 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.570395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.570414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570465 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.570476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.570487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:39:01.570557 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.570568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.570584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570607 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.570618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.570635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570665 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.570676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:39:01.570687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:39:01.570715 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.570726 | orchestrator | 2025-10-09 10:39:01.570737 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-10-09 10:39:01.570748 | orchestrator | Thursday 09 October 2025 10:35:22 +0000 (0:00:02.343) 0:00:20.552 ****** 2025-10-09 10:39:01.570759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.570771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:39:01.570787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.570802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.570813 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.570823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.570837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.570848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.570858 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.570868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.570891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2025-10-09 10:39:01.570901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.570911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.570922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.570977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.570989 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.570999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.571056 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.571071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.571082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.571092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.571102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.571117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.571128 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:39:01.571151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.571161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.571189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.571200 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.571214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.571225 | orchestrator | 2025-10-09 10:39:01.571235 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-10-09 10:39:01.571245 | orchestrator | Thursday 09 October 2025 10:35:28 +0000 (0:00:06.202) 0:00:26.755 ****** 2025-10-09 10:39:01.571255 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:39:01.571265 | orchestrator | 2025-10-09 10:39:01.571275 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-10-09 10:39:01.571284 | orchestrator | Thursday 09 October 2025 10:35:29 +0000 (0:00:01.136) 0:00:27.892 ****** 2025-10-09 10:39:01.571295 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098590, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1424944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571312 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098590, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1424944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571328 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098590, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1424944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571339 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098590, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1424944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571349 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098590, 'dev': 
134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1424944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571359 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098800, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1780736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571374 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098800, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1780736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571384 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098590, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1424944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571400 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098800, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1780736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571415 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098800, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1780736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571425 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098800, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1780736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571435 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098590, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1424944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.571445 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098545, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571459 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098545, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571475 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098800, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1780736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571485 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098545, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571501 | orchestrator | 
skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098613, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1466036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571511 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098545, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571521 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098613, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1466036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571531 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098545, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571546 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098545, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571562 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098526, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.12674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571572 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098613, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1466036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571588 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098613, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1466036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571598 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098594, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.143152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571608 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098526, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.12674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571618 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098613, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1466036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571633 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098613, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 
1760003441.1466036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571649 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098800, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1780736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.571659 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098526, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.12674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571675 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098594, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.143152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571686 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098526, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.12674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571696 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098526, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.12674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571706 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3900, 'inode': 1098526, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.12674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571729 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098610, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571740 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098594, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.143152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571750 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098610, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571765 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098594, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.143152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571775 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098594, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.143152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571785 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098594, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.143152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571796 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098610, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571815 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098596, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1438746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571826 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098596, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1438746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571836 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098610, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571850 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098610, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571861 | orchestrator 
| skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098596, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1438746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571871 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098585, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571881 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098585, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571900 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098610, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571910 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098585, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571920 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098545, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.571936 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098795, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571947 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098795, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571957 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098795, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571967 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098596, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1438746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571986 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098596, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1438746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.571997 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098596, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 
'ctime': 1760003441.1438746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572007 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098519, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572017 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098519, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572033 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098585, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572043 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098519, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572059 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098585, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572073 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098585, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572083 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098824, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1802754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572094 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098795, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572104 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098824, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1802754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572119 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098824, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1802754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572129 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098519, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572145 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098793, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572159 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098613, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1466036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.572183 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098824, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1802754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572193 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098795, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572203 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098529, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1306977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572440 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098795, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572455 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098793, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572472 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098524, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572487 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098793, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572498 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098793, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572508 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098519, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572518 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098529, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1306977, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572534 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098519, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572550 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098607, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572560 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098529, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1306977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572575 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098529, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1306977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572586 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098526, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.12674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.572596 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
5051, 'inode': 1098524, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572606 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098824, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1802754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572621 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098824, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1802754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572640 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098524, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572650 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098524, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572665 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098602, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1443353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572675 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098607, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572685 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098793, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572696 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098607, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572711 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098793, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572727 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098607, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572737 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098602, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1443353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 
10:39:01.572751 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098602, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1443353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572762 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098819, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.180086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572772 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.572782 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098819, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.180086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572792 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.572803 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098529, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1306977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572823 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098819, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.180086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572833 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.572843 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 
'inode': 1098529, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1306977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572853 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098594, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.143152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.572863 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098524, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572904 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098602, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1443353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572915 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098524, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572925 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098819, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.180086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572941 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.572956 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098607, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572966 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098607, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572977 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098602, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1443353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.572991 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098602, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1443353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.573001 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098819, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.180086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.573011 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.573022 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098610, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573037 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098819, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.180086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:39:01.573048 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.573062 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098596, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1438746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573075 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098585, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1414046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573087 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098795, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573102 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098519, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573114 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098824, 
'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1802754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573126 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098793, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1773682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573143 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098529, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1306977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573160 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098524, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1253598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573191 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098607, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1452909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573202 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098602, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1443353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573218 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098819, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.180086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:39:01.573230 | orchestrator | 2025-10-09 10:39:01.573241 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-10-09 10:39:01.573252 | orchestrator | Thursday 09 October 2025 10:36:01 +0000 (0:00:31.144) 0:00:59.036 ****** 2025-10-09 10:39:01.573264 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:39:01.573275 | orchestrator | 2025-10-09 10:39:01.573286 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-10-09 10:39:01.573297 | orchestrator | Thursday 09 October 2025 10:36:02 +0000 (0:00:01.011) 0:01:00.048 ****** 2025-10-09 10:39:01.573309 | orchestrator | [WARNING]: Skipped 2025-10-09 10:39:01.573320 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573332 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-10-09 10:39:01.573348 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573360 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-10-09 10:39:01.573371 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:39:01.573382 | orchestrator | [WARNING]: Skipped 2025-10-09 10:39:01.573393 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573404 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-10-09 10:39:01.573416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573426 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-10-09 10:39:01.573436 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:39:01.573446 | orchestrator | [WARNING]: Skipped 2025-10-09 10:39:01.573456 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573465 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-10-09 10:39:01.573475 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573484 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-10-09 10:39:01.573494 | orchestrator | [WARNING]: Skipped 2025-10-09 10:39:01.573504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573513 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-10-09 10:39:01.573523 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573533 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-10-09 10:39:01.573542 | orchestrator | [WARNING]: Skipped 2025-10-09 10:39:01.573557 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573567 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-10-09 10:39:01.573577 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573586 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-10-09 10:39:01.573596 | orchestrator | [WARNING]: Skipped 2025-10-09 10:39:01.573605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573615 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-10-09 10:39:01.573624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573634 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-10-09 10:39:01.573644 | orchestrator | [WARNING]: Skipped 2025-10-09 10:39:01.573653 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573663 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-10-09 10:39:01.573672 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-10-09 10:39:01.573682 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-10-09 10:39:01.573692 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-10-09 10:39:01.573701 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-10-09 10:39:01.573711 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:39:01.573720 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-10-09 10:39:01.573730 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-10-09 10:39:01.573739 | orchestrator | 2025-10-09 10:39:01.573749 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-10-09 10:39:01.573759 | orchestrator | Thursday 09 October 2025 10:36:06 +0000 (0:00:04.510) 0:01:04.558 ****** 2025-10-09 10:39:01.573768 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-09 10:39:01.573779 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.573788 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-09 10:39:01.573804 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.573814 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-09 10:39:01.573824 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.573833 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-09 10:39:01.573843 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.573852 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-09 10:39:01.573862 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.573876 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-10-09 10:39:01.573886 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.573895 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-10-09 10:39:01.573905 | orchestrator | 2025-10-09 10:39:01.573914 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-10-09 10:39:01.573924 | orchestrator | Thursday 09 October 2025 10:36:29 +0000 (0:00:22.630) 0:01:27.188 ****** 2025-10-09 10:39:01.573934 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-09 10:39:01.573944 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.573953 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-09 10:39:01.573963 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-09 10:39:01.573973 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.573982 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.573992 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-09 10:39:01.574001 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.574011 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-09 10:39:01.574065 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.574075 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-10-09 10:39:01.574086 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.574095 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-10-09 10:39:01.574105 | orchestrator | 2025-10-09 10:39:01.574115 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-10-09 10:39:01.574124 | orchestrator | Thursday 09 October 2025 10:36:34 +0000 (0:00:05.292) 0:01:32.482 ****** 2025-10-09 10:39:01.574134 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-09 10:39:01.574145 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-09 10:39:01.574155 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-09 10:39:01.574165 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.574202 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.574212 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.574227 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-10-09 10:39:01.574237 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-09 10:39:01.574247 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.574256 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-09 10:39:01.574277 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.574286 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-10-09 10:39:01.574296 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.574306 | orchestrator | 2025-10-09 10:39:01.574315 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-10-09 10:39:01.574325 | orchestrator | Thursday 09 October 2025 10:36:38 +0000 (0:00:03.825) 0:01:36.307 ****** 
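The repeated "[WARNING]: Skipped ... is not a directory" messages a little further up are emitted while the play searches optional per-host override directories such as /opt/configuration/environments/kolla/files/overlays/prometheus/<hostname>/prometheus.yml.d; the log shows the tasks still finish with "ok", so an absent directory only means there is nothing extra to merge. A minimal Python sketch of that optional-lookup pattern follows; the function name, paths and return shape are illustrative only and are not the kolla-ansible implementation.

    from pathlib import Path

    def find_host_overrides(base: str, hostname: str) -> list[Path]:
        """Return optional prometheus.yml.d override snippets for one host.

        A missing directory is not an error -- it simply yields nothing to merge,
        which is what the skipped-path warnings in the log amount to.
        """
        override_dir = Path(base) / hostname / "prometheus.yml.d"
        if not override_dir.is_dir():
            return []
        return sorted(override_dir.glob("*.yml"))

    # Example: on this testbed none of the per-host directories exist, so every
    # host yields an empty list and the play simply moves on.
    base = "/opt/configuration/environments/kolla/files/overlays/prometheus"
    for host in ["testbed-manager"] + [f"testbed-node-{i}" for i in range(6)]:
        print(host, find_host_overrides(base, host))
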
2025-10-09 10:39:01.574335 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:39:01.574344 | orchestrator | 2025-10-09 10:39:01.574354 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-10-09 10:39:01.574364 | orchestrator | Thursday 09 October 2025 10:36:39 +0000 (0:00:01.428) 0:01:37.736 ****** 2025-10-09 10:39:01.574373 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:39:01.574383 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.574392 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.574402 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.574412 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.574421 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.574431 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.574440 | orchestrator | 2025-10-09 10:39:01.574450 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-10-09 10:39:01.574460 | orchestrator | Thursday 09 October 2025 10:36:41 +0000 (0:00:01.566) 0:01:39.302 ****** 2025-10-09 10:39:01.574469 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:39:01.574478 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:01.574488 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.574497 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:39:01.574507 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.574516 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.574526 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:39:01.574535 | orchestrator | 2025-10-09 10:39:01.574545 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-10-09 10:39:01.574555 | orchestrator | Thursday 09 October 2025 10:36:47 +0000 (0:00:05.765) 0:01:45.067 ****** 2025-10-09 10:39:01.574564 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:39:01.574579 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:39:01.574589 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.574599 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:39:01.574608 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:39:01.574618 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.574627 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:39:01.574637 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.574646 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:39:01.574656 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.574666 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:39:01.574675 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.574685 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:39:01.574694 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.574704 | orchestrator | 2025-10-09 10:39:01.574713 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-10-09 10:39:01.574723 
| orchestrator | Thursday 09 October 2025 10:36:51 +0000 (0:00:04.162) 0:01:49.230 ****** 2025-10-09 10:39:01.574732 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:39:01.574748 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.574757 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-10-09 10:39:01.574767 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:39:01.574777 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.574786 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:39:01.574796 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.574805 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:39:01.574815 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.574824 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:39:01.574834 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.574844 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:39:01.574853 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.574863 | orchestrator | 2025-10-09 10:39:01.574877 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-10-09 10:39:01.574887 | orchestrator | Thursday 09 October 2025 10:36:54 +0000 (0:00:03.616) 0:01:52.846 ****** 2025-10-09 10:39:01.574897 | orchestrator | [WARNING]: Skipped 2025-10-09 10:39:01.574907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-10-09 10:39:01.574916 | orchestrator | due to this access issue: 2025-10-09 10:39:01.574926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-10-09 10:39:01.574936 | orchestrator | not a directory 2025-10-09 10:39:01.574945 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:39:01.574955 | orchestrator | 2025-10-09 10:39:01.574965 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-10-09 10:39:01.574975 | orchestrator | Thursday 09 October 2025 10:36:56 +0000 (0:00:01.944) 0:01:54.791 ****** 2025-10-09 10:39:01.574984 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:39:01.574994 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.575003 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.575013 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.575022 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.575032 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.575042 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.575051 | orchestrator | 2025-10-09 10:39:01.575061 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-10-09 10:39:01.575070 | orchestrator | Thursday 09 October 2025 10:36:58 +0000 (0:00:01.339) 0:01:56.130 ****** 2025-10-09 10:39:01.575080 | orchestrator | skipping: [testbed-manager] 2025-10-09 
10:39:01.575090 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:01.575099 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:01.575109 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:01.575118 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:01.575128 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:01.575138 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:01.575147 | orchestrator | 2025-10-09 10:39:01.575157 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-10-09 10:39:01.575167 | orchestrator | Thursday 09 October 2025 10:36:59 +0000 (0:00:01.557) 0:01:57.687 ****** 2025-10-09 10:39:01.575193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.575214 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:39:01.575225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.575235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.575251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575273 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.575283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.575314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575334 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.575361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:39:01.575371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575502 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:39:01.575513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:39:01.575539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:39:01.575565 | orchestrator | 2025-10-09 10:39:01.575575 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-10-09 10:39:01.575585 | orchestrator | Thursday 09 October 2025 10:37:05 +0000 (0:00:05.591) 0:02:03.281 ****** 2025-10-09 10:39:01.575595 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-10-09 10:39:01.575604 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:39:01.575614 | orchestrator | 2025-10-09 10:39:01.575624 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-10-09 10:39:01.575633 | orchestrator | Thursday 09 October 2025 10:37:06 +0000 (0:00:01.606) 0:02:04.887 ****** 2025-10-09 10:39:01.575642 | orchestrator | 2025-10-09 10:39:01.575652 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-10-09 10:39:01.575661 | orchestrator | Thursday 09 October 2025 10:37:07 +0000 (0:00:00.159) 0:02:05.047 ****** 2025-10-09 10:39:01.575671 | orchestrator | 2025-10-09 10:39:01.575681 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-10-09 10:39:01.575690 | orchestrator | Thursday 09 October 2025 10:37:07 +0000 (0:00:00.122) 0:02:05.169 ****** 2025-10-09 10:39:01.575700 | orchestrator | 2025-10-09 10:39:01.575714 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-10-09 10:39:01.575724 | orchestrator | Thursday 09 October 2025 10:37:07 +0000 (0:00:00.076) 0:02:05.246 ****** 2025-10-09 10:39:01.575733 | orchestrator | 2025-10-09 10:39:01.575743 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-10-09 10:39:01.575752 | orchestrator | Thursday 09 October 2025 10:37:07 +0000 (0:00:00.314) 0:02:05.560 ****** 2025-10-09 10:39:01.575762 | orchestrator | 2025-10-09 10:39:01.575772 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-10-09 10:39:01.575781 | orchestrator | Thursday 09 October 2025 10:37:07 +0000 (0:00:00.071) 0:02:05.632 ****** 2025-10-09 10:39:01.575791 | orchestrator | 2025-10-09 10:39:01.575800 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-10-09 10:39:01.575810 | orchestrator | Thursday 09 October 2025 10:37:07 +0000 (0:00:00.068) 0:02:05.701 ****** 2025-10-09 10:39:01.575819 | orchestrator | 2025-10-09 10:39:01.575829 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-10-09 10:39:01.575838 | orchestrator | Thursday 09 October 2025 10:37:07 +0000 (0:00:00.095) 0:02:05.797 ****** 2025-10-09 10:39:01.575848 | orchestrator | changed: [testbed-manager] 2025-10-09 10:39:01.575857 | orchestrator | 2025-10-09 10:39:01.575867 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-10-09 10:39:01.575877 | orchestrator | Thursday 09 October 2025 10:37:27 +0000 (0:00:19.579) 0:02:25.376 ****** 2025-10-09 10:39:01.575886 | orchestrator | changed: [testbed-manager] 2025-10-09 10:39:01.575896 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:39:01.575905 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:39:01.575915 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:39:01.575924 | orchestrator | changed: [testbed-node-0] 2025-10-09 
10:39:01.575934 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:39:01.575944 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:39:01.575953 | orchestrator | 2025-10-09 10:39:01.575963 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-10-09 10:39:01.575972 | orchestrator | Thursday 09 October 2025 10:37:40 +0000 (0:00:12.899) 0:02:38.276 ****** 2025-10-09 10:39:01.575982 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:39:01.575991 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:39:01.576001 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:01.576010 | orchestrator | 2025-10-09 10:39:01.576020 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-10-09 10:39:01.576035 | orchestrator | Thursday 09 October 2025 10:37:50 +0000 (0:00:10.205) 0:02:48.481 ****** 2025-10-09 10:39:01.576044 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:01.576054 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:39:01.576064 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:39:01.576073 | orchestrator | 2025-10-09 10:39:01.576083 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-10-09 10:39:01.576093 | orchestrator | Thursday 09 October 2025 10:38:01 +0000 (0:00:10.781) 0:02:59.262 ****** 2025-10-09 10:39:01.576102 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:01.576112 | orchestrator | changed: [testbed-manager] 2025-10-09 10:39:01.576125 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:39:01.576136 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:39:01.576145 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:39:01.576155 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:39:01.576164 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:39:01.576187 | orchestrator | 2025-10-09 10:39:01.576198 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-10-09 10:39:01.576207 | orchestrator | Thursday 09 October 2025 10:38:19 +0000 (0:00:18.033) 0:03:17.295 ****** 2025-10-09 10:39:01.576217 | orchestrator | changed: [testbed-manager] 2025-10-09 10:39:01.576226 | orchestrator | 2025-10-09 10:39:01.576236 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-10-09 10:39:01.576246 | orchestrator | Thursday 09 October 2025 10:38:29 +0000 (0:00:10.434) 0:03:27.730 ****** 2025-10-09 10:39:01.576256 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:39:01.576265 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:01.576275 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:39:01.576285 | orchestrator | 2025-10-09 10:39:01.576294 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-10-09 10:39:01.576304 | orchestrator | Thursday 09 October 2025 10:38:39 +0000 (0:00:09.736) 0:03:37.466 ****** 2025-10-09 10:39:01.576314 | orchestrator | changed: [testbed-manager] 2025-10-09 10:39:01.576323 | orchestrator | 2025-10-09 10:39:01.576333 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-10-09 10:39:01.576343 | orchestrator | Thursday 09 October 2025 10:38:49 +0000 (0:00:09.774) 0:03:47.241 ****** 2025-10-09 10:39:01.576352 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:39:01.576362 | orchestrator | changed: [testbed-node-3] 2025-10-09 
10:39:01.576372 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:39:01.576381 | orchestrator |
2025-10-09 10:39:01.576391 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:39:01.576400 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-10-09 10:39:01.576411 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-10-09 10:39:01.576421 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-10-09 10:39:01.576431 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-10-09 10:39:01.576441 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-10-09 10:39:01.576455 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-10-09 10:39:01.576465 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-10-09 10:39:01.576474 | orchestrator |
2025-10-09 10:39:01.576484 | orchestrator |
2025-10-09 10:39:01.576502 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:39:01.576512 | orchestrator | Thursday 09 October 2025 10:39:00 +0000 (0:00:11.602) 0:03:58.844 ******
2025-10-09 10:39:01.576522 | orchestrator | ===============================================================================
2025-10-09 10:39:01.576532 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 31.14s
2025-10-09 10:39:01.576541 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 22.63s
2025-10-09 10:39:01.576551 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.58s
2025-10-09 10:39:01.576561 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.03s
2025-10-09 10:39:01.576570 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.90s
2025-10-09 10:39:01.576580 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.60s
2025-10-09 10:39:01.576589 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.78s
2025-10-09 10:39:01.576599 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 10.43s
2025-10-09 10:39:01.576609 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.21s
2025-10-09 10:39:01.576618 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.77s
2025-10-09 10:39:01.576628 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.74s
2025-10-09 10:39:01.576637 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.76s
2025-10-09 10:39:01.576647 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.20s
2025-10-09 10:39:01.576656 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 5.77s
2025-10-09 10:39:01.576666 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.59s
2025-10-09 10:39:01.576675 | orchestrator | prometheus : Copying over
prometheus web config file -------------------- 5.29s 2025-10-09 10:39:01.576685 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 4.51s 2025-10-09 10:39:01.576695 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 4.16s 2025-10-09 10:39:01.576709 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.99s 2025-10-09 10:39:01.576719 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.83s 2025-10-09 10:39:04.617322 | orchestrator | 2025-10-09 10:39:04 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:04.619110 | orchestrator | 2025-10-09 10:39:04 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:04.621607 | orchestrator | 2025-10-09 10:39:04 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:04.623526 | orchestrator | 2025-10-09 10:39:04 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:04.623865 | orchestrator | 2025-10-09 10:39:04 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:07.671356 | orchestrator | 2025-10-09 10:39:07 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:07.672718 | orchestrator | 2025-10-09 10:39:07 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:07.673939 | orchestrator | 2025-10-09 10:39:07 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:07.675780 | orchestrator | 2025-10-09 10:39:07 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:07.676094 | orchestrator | 2025-10-09 10:39:07 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:10.723019 | orchestrator | 2025-10-09 10:39:10 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:10.724508 | orchestrator | 2025-10-09 10:39:10 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:10.726213 | orchestrator | 2025-10-09 10:39:10 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:10.727507 | orchestrator | 2025-10-09 10:39:10 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:10.727530 | orchestrator | 2025-10-09 10:39:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:13.774461 | orchestrator | 2025-10-09 10:39:13 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:13.776920 | orchestrator | 2025-10-09 10:39:13 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:13.779369 | orchestrator | 2025-10-09 10:39:13 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:13.781541 | orchestrator | 2025-10-09 10:39:13 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:13.781568 | orchestrator | 2025-10-09 10:39:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:16.820448 | orchestrator | 2025-10-09 10:39:16 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:16.820574 | orchestrator | 2025-10-09 10:39:16 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:16.820610 | orchestrator | 2025-10-09 10:39:16 | INFO  | Task 
6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:16.821416 | orchestrator | 2025-10-09 10:39:16 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:16.821548 | orchestrator | 2025-10-09 10:39:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:19.862339 | orchestrator | 2025-10-09 10:39:19 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:19.862455 | orchestrator | 2025-10-09 10:39:19 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:19.862467 | orchestrator | 2025-10-09 10:39:19 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:19.862477 | orchestrator | 2025-10-09 10:39:19 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:19.862487 | orchestrator | 2025-10-09 10:39:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:22.892256 | orchestrator | 2025-10-09 10:39:22 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:22.892767 | orchestrator | 2025-10-09 10:39:22 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:22.894296 | orchestrator | 2025-10-09 10:39:22 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:22.895890 | orchestrator | 2025-10-09 10:39:22 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:22.895915 | orchestrator | 2025-10-09 10:39:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:25.939645 | orchestrator | 2025-10-09 10:39:25 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:25.939944 | orchestrator | 2025-10-09 10:39:25 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:25.940834 | orchestrator | 2025-10-09 10:39:25 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:25.941691 | orchestrator | 2025-10-09 10:39:25 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:25.941725 | orchestrator | 2025-10-09 10:39:25 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:28.979769 | orchestrator | 2025-10-09 10:39:28 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:28.980108 | orchestrator | 2025-10-09 10:39:28 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:28.981960 | orchestrator | 2025-10-09 10:39:28 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:28.982868 | orchestrator | 2025-10-09 10:39:28 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:28.982922 | orchestrator | 2025-10-09 10:39:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:32.035409 | orchestrator | 2025-10-09 10:39:32 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:32.035528 | orchestrator | 2025-10-09 10:39:32 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:32.035829 | orchestrator | 2025-10-09 10:39:32 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:32.036739 | orchestrator | 2025-10-09 10:39:32 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:32.036765 | orchestrator | 2025-10-09 10:39:32 | INFO  | Wait 1 
second(s) until the next check 2025-10-09 10:39:35.073759 | orchestrator | 2025-10-09 10:39:35 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:35.076226 | orchestrator | 2025-10-09 10:39:35 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:35.080135 | orchestrator | 2025-10-09 10:39:35 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:35.080196 | orchestrator | 2025-10-09 10:39:35 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:35.080211 | orchestrator | 2025-10-09 10:39:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:38.114315 | orchestrator | 2025-10-09 10:39:38 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:38.120742 | orchestrator | 2025-10-09 10:39:38 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:38.122094 | orchestrator | 2025-10-09 10:39:38 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:38.122973 | orchestrator | 2025-10-09 10:39:38 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:38.122993 | orchestrator | 2025-10-09 10:39:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:41.217506 | orchestrator | 2025-10-09 10:39:41 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:41.217603 | orchestrator | 2025-10-09 10:39:41 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:41.218297 | orchestrator | 2025-10-09 10:39:41 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:41.219023 | orchestrator | 2025-10-09 10:39:41 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:41.219044 | orchestrator | 2025-10-09 10:39:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:44.246088 | orchestrator | 2025-10-09 10:39:44 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:44.246514 | orchestrator | 2025-10-09 10:39:44 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:44.247008 | orchestrator | 2025-10-09 10:39:44 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:44.247852 | orchestrator | 2025-10-09 10:39:44 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state STARTED 2025-10-09 10:39:44.247877 | orchestrator | 2025-10-09 10:39:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:47.276994 | orchestrator | 2025-10-09 10:39:47 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:47.277499 | orchestrator | 2025-10-09 10:39:47 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:47.278213 | orchestrator | 2025-10-09 10:39:47 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:47.279806 | orchestrator | 2025-10-09 10:39:47 | INFO  | Task 27a1d0be-2ae6-40f0-bca2-7cb3c2b380ae is in state SUCCESS 2025-10-09 10:39:47.281147 | orchestrator | 2025-10-09 10:39:47.281204 | orchestrator | 2025-10-09 10:39:47.281275 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:39:47.281288 | orchestrator | 2025-10-09 10:39:47.281299 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-10-09 10:39:47.281310 | orchestrator | Thursday 09 October 2025 10:35:27 +0000 (0:00:00.287) 0:00:00.287 ****** 2025-10-09 10:39:47.281321 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:39:47.281444 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:39:47.281458 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:39:47.281469 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:39:47.281480 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:39:47.281491 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:39:47.281501 | orchestrator | 2025-10-09 10:39:47.281590 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:39:47.281604 | orchestrator | Thursday 09 October 2025 10:35:28 +0000 (0:00:00.753) 0:00:01.040 ****** 2025-10-09 10:39:47.281615 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-10-09 10:39:47.281626 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-10-09 10:39:47.281637 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-10-09 10:39:47.281648 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-10-09 10:39:47.281659 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-10-09 10:39:47.281669 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-10-09 10:39:47.281680 | orchestrator | 2025-10-09 10:39:47.281691 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-10-09 10:39:47.281701 | orchestrator | 2025-10-09 10:39:47.281712 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-09 10:39:47.281723 | orchestrator | Thursday 09 October 2025 10:35:28 +0000 (0:00:00.624) 0:00:01.665 ****** 2025-10-09 10:39:47.281734 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:39:47.281746 | orchestrator | 2025-10-09 10:39:47.281760 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-10-09 10:39:47.281772 | orchestrator | Thursday 09 October 2025 10:35:30 +0000 (0:00:01.298) 0:00:02.963 ****** 2025-10-09 10:39:47.281785 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-10-09 10:39:47.281798 | orchestrator | 2025-10-09 10:39:47.281826 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-10-09 10:39:47.281839 | orchestrator | Thursday 09 October 2025 10:35:33 +0000 (0:00:03.779) 0:00:06.743 ****** 2025-10-09 10:39:47.281852 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-10-09 10:39:47.281865 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-10-09 10:39:47.281877 | orchestrator | 2025-10-09 10:39:47.281889 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-10-09 10:39:47.281923 | orchestrator | Thursday 09 October 2025 10:35:40 +0000 (0:00:06.780) 0:00:13.523 ****** 2025-10-09 10:39:47.281936 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:39:47.281948 | orchestrator | 2025-10-09 10:39:47.281961 | orchestrator | TASK [service-ks-register : cinder | Creating users] 
*************************** 2025-10-09 10:39:47.281973 | orchestrator | Thursday 09 October 2025 10:35:44 +0000 (0:00:03.459) 0:00:16.983 ****** 2025-10-09 10:39:47.281986 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:39:47.281998 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-10-09 10:39:47.282009 | orchestrator | 2025-10-09 10:39:47.282066 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-10-09 10:39:47.282078 | orchestrator | Thursday 09 October 2025 10:35:47 +0000 (0:00:03.782) 0:00:20.766 ****** 2025-10-09 10:39:47.282090 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:39:47.282102 | orchestrator | 2025-10-09 10:39:47.282114 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-10-09 10:39:47.282127 | orchestrator | Thursday 09 October 2025 10:35:50 +0000 (0:00:03.078) 0:00:23.844 ****** 2025-10-09 10:39:47.282138 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-10-09 10:39:47.282148 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-10-09 10:39:47.282159 | orchestrator | 2025-10-09 10:39:47.282199 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-10-09 10:39:47.282211 | orchestrator | Thursday 09 October 2025 10:35:58 +0000 (0:00:07.780) 0:00:31.625 ****** 2025-10-09 10:39:47.282244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.282259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.282272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.282300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.282312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.282324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.282345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2025-10-09 10:39:47.282394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.282413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.282470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.282484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.282502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.282514 | orchestrator | 2025-10-09 10:39:47.282526 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-09 10:39:47.282537 | orchestrator | Thursday 09 October 2025 10:36:01 +0000 (0:00:02.953) 0:00:34.578 ****** 2025-10-09 10:39:47.282548 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.282559 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.282570 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.282581 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:47.282592 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:47.282603 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:47.282613 | orchestrator | 2025-10-09 10:39:47.282624 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-09 10:39:47.282635 | orchestrator | Thursday 09 October 2025 10:36:02 +0000 (0:00:01.163) 0:00:35.741 ****** 2025-10-09 10:39:47.282646 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.282657 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.282675 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.282686 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:39:47.282697 | orchestrator | 2025-10-09 10:39:47.282708 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-10-09 10:39:47.282719 | orchestrator | Thursday 09 October 2025 10:36:05 +0000 (0:00:02.454) 0:00:38.196 ****** 2025-10-09 10:39:47.282730 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-10-09 10:39:47.282741 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-10-09 10:39:47.282751 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-10-09 10:39:47.282762 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-10-09 10:39:47.282773 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-10-09 10:39:47.282783 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-10-09 10:39:47.282794 | orchestrator | 2025-10-09 10:39:47.282805 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-10-09 10:39:47.282821 | orchestrator | Thursday 09 October 2025 10:36:08 +0000 (0:00:02.863) 0:00:41.059 ****** 2025-10-09 10:39:47.282833 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:39:47.282846 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:39:47.282873 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:39:47.282885 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:39:47.282909 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:39:47.282921 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:39:47.282933 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:39:47.282951 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:39:47.282969 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:39:47.282986 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:39:47.282999 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:39:47.283010 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:39:47.283022 | orchestrator | 2025-10-09 10:39:47.283033 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-10-09 10:39:47.283044 | orchestrator | Thursday 09 October 2025 10:36:13 +0000 (0:00:05.543) 0:00:46.602 ****** 2025-10-09 10:39:47.283055 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:39:47.283067 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:39:47.283078 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:39:47.283088 | orchestrator | 2025-10-09 10:39:47.283099 | orchestrator | TASK [cinder : 
Copy over Ceph keyring files for cinder-backup] ***************** 2025-10-09 10:39:47.283115 | orchestrator | Thursday 09 October 2025 10:36:16 +0000 (0:00:02.555) 0:00:49.158 ****** 2025-10-09 10:39:47.283131 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-10-09 10:39:47.283143 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-10-09 10:39:47.283154 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-10-09 10:39:47.283183 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:39:47.283195 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:39:47.283206 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:39:47.283216 | orchestrator | 2025-10-09 10:39:47.283227 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-10-09 10:39:47.283238 | orchestrator | Thursday 09 October 2025 10:36:19 +0000 (0:00:03.541) 0:00:52.700 ****** 2025-10-09 10:39:47.283249 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-10-09 10:39:47.283260 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-10-09 10:39:47.283270 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-10-09 10:39:47.283281 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-10-09 10:39:47.283292 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-10-09 10:39:47.283336 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-10-09 10:39:47.283348 | orchestrator | 2025-10-09 10:39:47.283359 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-10-09 10:39:47.283369 | orchestrator | Thursday 09 October 2025 10:36:20 +0000 (0:00:01.211) 0:00:53.912 ****** 2025-10-09 10:39:47.283380 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.283391 | orchestrator | 2025-10-09 10:39:47.283402 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-10-09 10:39:47.283413 | orchestrator | Thursday 09 October 2025 10:36:21 +0000 (0:00:00.148) 0:00:54.061 ****** 2025-10-09 10:39:47.283424 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.283435 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.283445 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.283456 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:47.283467 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:47.283478 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:47.283489 | orchestrator | 2025-10-09 10:39:47.283500 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-09 10:39:47.283516 | orchestrator | Thursday 09 October 2025 10:36:22 +0000 (0:00:00.986) 0:00:55.048 ****** 2025-10-09 10:39:47.283527 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:39:47.283540 | orchestrator | 2025-10-09 10:39:47.283550 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-10-09 10:39:47.283561 | orchestrator | Thursday 09 October 2025 10:36:23 +0000 (0:00:01.530) 0:00:56.578 ****** 2025-10-09 10:39:47.283573 | orchestrator | 
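The tasks in this stretch wire cinder-volume and cinder-backup to the external Ceph cluster: per-service ceph config subdirectories are created, a ceph.conf is rendered into each, and the ceph.client.cinder.keyring and ceph.client.cinder-backup.keyring files are copied for the rbd-1 backend. As a sketch only: in a stock kolla-ansible deployment the operator provides these source files under the node custom config directory roughly as listed below; the /etc/kolla/config paths follow the upstream external-Ceph guide and are an assumption here, since the OSISM testbed generates them from its own configuration repository.

$ find /etc/kolla/config/cinder -type f        # hypothetical source layout, not taken from this log
/etc/kolla/config/cinder/ceph.conf
/etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring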
changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.283600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.283620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.283632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.283766 | orchestrator | 2025-10-09 10:39:47.283778 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-10-09 10:39:47.283788 | orchestrator | Thursday 09 October 2025 10:36:27 +0000 (0:00:03.621) 0:01:00.200 ****** 2025-10-09 10:39:47.283806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:39:47.283818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.283830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:39:47.283845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.283857 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.283868 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.283890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:39:47.283901 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.283912 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.283930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.283942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.283953 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:47.283969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.283989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284000 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:47.284012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284048 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:47.284059 | orchestrator | 2025-10-09 10:39:47.284070 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-10-09 10:39:47.284081 | orchestrator | Thursday 09 October 2025 10:36:29 +0000 (0:00:02.449) 0:01:02.649 ****** 2025-10-09 10:39:47.284092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-10-09 10:39:47.284109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284127 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.284138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:39:47.284150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284161 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.284228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:39:47.284241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284252 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.284268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284299 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:47.284310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284339 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:47.284350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.284385 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:47.284396 | orchestrator | 2025-10-09 10:39:47.284407 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-10-09 10:39:47.284418 | orchestrator | Thursday 09 October 2025 10:36:32 +0000 (0:00:03.237) 0:01:05.886 ****** 2025-10-09 10:39:47.284429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.284441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.284459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.284471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284511 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284522 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284585 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284608 | orchestrator | 2025-10-09 10:39:47.284618 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-10-09 10:39:47.284630 | orchestrator | Thursday 09 October 2025 10:36:36 +0000 (0:00:04.007) 0:01:09.894 ****** 2025-10-09 10:39:47.284641 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-10-09 10:39:47.284652 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:47.284663 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-10-09 10:39:47.284673 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:47.284684 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-10-09 10:39:47.284695 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-10-09 10:39:47.284706 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:47.284716 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-10-09 10:39:47.284731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-10-09 10:39:47.284740 | orchestrator | 2025-10-09 10:39:47.284750 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-10-09 10:39:47.284760 | orchestrator | Thursday 09 October 2025 10:36:39 +0000 (0:00:02.960) 0:01:12.855 ****** 2025-10-09 10:39:47.284770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.284790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.284801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.284812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.284931 | orchestrator | 2025-10-09 10:39:47.284940 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-10-09 10:39:47.284950 | orchestrator | Thursday 09 October 2025 10:36:55 +0000 (0:00:15.895) 0:01:28.751 ****** 2025-10-09 10:39:47.284960 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.284970 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.284979 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.284989 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:39:47.284998 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:39:47.285012 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:39:47.285022 | orchestrator | 2025-10-09 10:39:47.285031 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-10-09 10:39:47.285041 | orchestrator | Thursday 09 October 2025 10:36:58 +0000 (0:00:02.708) 0:01:31.459 ****** 2025-10-09 10:39:47.285051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:39:47.285061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.285077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:39:47.285093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.285103 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.285113 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.285128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:39:47.285139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.285149 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.285158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.285186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.285201 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:47.285274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-10-09 10:39:47.285287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.285297 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:47.285311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.285322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:39:47.285332 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:47.285342 | orchestrator | 2025-10-09 10:39:47.285352 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-10-09 10:39:47.285362 | orchestrator | Thursday 09 October 2025 10:37:00 +0000 (0:00:02.469) 0:01:33.929 ****** 2025-10-09 10:39:47.285377 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.285387 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.285397 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.285406 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:47.285416 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:47.285425 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:47.285435 | orchestrator | 2025-10-09 10:39:47.285445 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-10-09 10:39:47.285454 | orchestrator | Thursday 09 October 2025 10:37:02 +0000 (0:00:01.651) 0:01:35.580 
****** 2025-10-09 10:39:47.285471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.285482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.285500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:39:47.285510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:39:47.285627 | orchestrator | 2025-10-09 10:39:47.285636 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-09 10:39:47.285646 | orchestrator | Thursday 09 October 2025 10:37:06 +0000 (0:00:03.905) 0:01:39.486 ****** 2025-10-09 10:39:47.285656 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:39:47.285666 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:39:47.285675 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:39:47.285685 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:39:47.285695 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:39:47.285704 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:39:47.285714 | orchestrator | 2025-10-09 10:39:47.285724 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-10-09 10:39:47.285734 | orchestrator | Thursday 09 
October 2025 10:37:07 +0000 (0:00:00.795) 0:01:40.281 ****** 2025-10-09 10:39:47.285743 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:47.285753 | orchestrator | 2025-10-09 10:39:47.285762 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-10-09 10:39:47.285773 | orchestrator | Thursday 09 October 2025 10:37:09 +0000 (0:00:02.646) 0:01:42.927 ****** 2025-10-09 10:39:47.285783 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:47.285793 | orchestrator | 2025-10-09 10:39:47.285803 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-10-09 10:39:47.285812 | orchestrator | Thursday 09 October 2025 10:37:12 +0000 (0:00:02.269) 0:01:45.197 ****** 2025-10-09 10:39:47.285827 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:47.285837 | orchestrator | 2025-10-09 10:39:47.285846 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:39:47.285856 | orchestrator | Thursday 09 October 2025 10:37:33 +0000 (0:00:21.391) 0:02:06.589 ****** 2025-10-09 10:39:47.285865 | orchestrator | 2025-10-09 10:39:47.285875 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:39:47.285891 | orchestrator | Thursday 09 October 2025 10:37:33 +0000 (0:00:00.067) 0:02:06.657 ****** 2025-10-09 10:39:47.285901 | orchestrator | 2025-10-09 10:39:47.285911 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:39:47.285921 | orchestrator | Thursday 09 October 2025 10:37:33 +0000 (0:00:00.068) 0:02:06.725 ****** 2025-10-09 10:39:47.285931 | orchestrator | 2025-10-09 10:39:47.285940 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:39:47.285950 | orchestrator | Thursday 09 October 2025 10:37:33 +0000 (0:00:00.094) 0:02:06.820 ****** 2025-10-09 10:39:47.285960 | orchestrator | 2025-10-09 10:39:47.285970 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:39:47.285980 | orchestrator | Thursday 09 October 2025 10:37:33 +0000 (0:00:00.061) 0:02:06.882 ****** 2025-10-09 10:39:47.285989 | orchestrator | 2025-10-09 10:39:47.285999 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:39:47.286009 | orchestrator | Thursday 09 October 2025 10:37:34 +0000 (0:00:00.081) 0:02:06.964 ****** 2025-10-09 10:39:47.286047 | orchestrator | 2025-10-09 10:39:47.286059 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-10-09 10:39:47.286069 | orchestrator | Thursday 09 October 2025 10:37:34 +0000 (0:00:00.069) 0:02:07.033 ****** 2025-10-09 10:39:47.286079 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:47.286088 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:39:47.286098 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:39:47.286108 | orchestrator | 2025-10-09 10:39:47.286118 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-10-09 10:39:47.286128 | orchestrator | Thursday 09 October 2025 10:38:02 +0000 (0:00:28.869) 0:02:35.903 ****** 2025-10-09 10:39:47.286138 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:39:47.286147 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:39:47.286158 | orchestrator | changed: [testbed-node-1] 
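The container definitions dumped in the tasks above attach two kinds of Docker healthchecks to the cinder containers: an HTTP probe for the API (for example "healthcheck_curl http://192.168.16.10:8776" on testbed-node-0) and a port-oriented probe for the agents ("healthcheck_port cinder-volume 5672", i.e. the connection toward RabbitMQ). The actual check commands are shell scripts shipped inside the kolla images and are not shown in this log; the Python sketch below only approximates their effect (exit 0 when healthy, non-zero otherwise) and deliberately simplifies the port check to plain TCP reachability, so every name and detail in it is an assumption made for illustration.

```python
# Illustrative sketch only: approximates the healthcheck commands referenced in the
# container definitions above (healthcheck_curl / healthcheck_port). The real
# implementations live inside the kolla images; nothing here is their actual code.
import socket
import sys
import urllib.error
import urllib.request


def healthcheck_curl(url: str, timeout: float = 30.0) -> int:
    """Exit-code style HTTP probe: 0 if the endpoint answers, 1 if it is unreachable."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return 0
    except urllib.error.HTTPError:
        # An HTTP error status still proves the API process is answering.
        return 0
    except OSError:
        return 1


def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> int:
    """Simplified port probe: 0 if a TCP connection to host:port succeeds, 1 otherwise.

    The kolla check actually verifies that the named service (e.g. cinder-volume)
    holds a connection on the given port (5672, RabbitMQ); plain reachability is
    used here only to keep the sketch self-contained.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0
    except OSError:
        return 1


if __name__ == "__main__":
    # Mirrors the cinder-api check from the log on testbed-node-0.
    sys.exit(healthcheck_curl("http://192.168.16.10:8776"))
```

Whether a container is then marked healthy depends only on that exit code, retried according to the interval, retries and timeout values (30/3/30) visible in the healthcheck dictionaries logged above.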
2025-10-09 10:39:47.286211 | orchestrator |
2025-10-09 10:39:47.286222 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-10-09 10:39:47.286232 | orchestrator | Thursday 09 October 2025 10:38:18 +0000 (0:00:15.734) 0:02:51.638 ******
2025-10-09 10:39:47.286241 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:39:47.286251 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:39:47.286260 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:39:47.286270 | orchestrator |
2025-10-09 10:39:47.286279 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-10-09 10:39:47.286289 | orchestrator | Thursday 09 October 2025 10:39:36 +0000 (0:01:18.289) 0:04:09.927 ******
2025-10-09 10:39:47.286299 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:39:47.286308 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:39:47.286318 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:39:47.286327 | orchestrator |
2025-10-09 10:39:47.286337 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-10-09 10:39:47.286346 | orchestrator | Thursday 09 October 2025 10:39:44 +0000 (0:00:07.860) 0:04:17.788 ******
2025-10-09 10:39:47.286356 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:39:47.286365 | orchestrator |
2025-10-09 10:39:47.286375 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:39:47.286391 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-10-09 10:39:47.286402 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-10-09 10:39:47.286412 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-10-09 10:39:47.286421 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-10-09 10:39:47.286438 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-10-09 10:39:47.286447 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-10-09 10:39:47.286455 | orchestrator |
2025-10-09 10:39:47.286462 | orchestrator |
2025-10-09 10:39:47.286470 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:39:47.286478 | orchestrator | Thursday 09 October 2025 10:39:45 +0000 (0:00:00.951) 0:04:18.739 ******
2025-10-09 10:39:47.286486 | orchestrator | ===============================================================================
2025-10-09 10:39:47.286493 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 78.29s
2025-10-09 10:39:47.286501 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 28.87s
2025-10-09 10:39:47.286509 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.39s
2025-10-09 10:39:47.286517 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.90s
2025-10-09 10:39:47.286525 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 15.73s
2025-10-09 10:39:47.286614 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.86s
2025-10-09 10:39:47.286629 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.78s
2025-10-09 10:39:47.286637 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.78s
2025-10-09 10:39:47.286645 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.54s
2025-10-09 10:39:47.286653 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.01s
2025-10-09 10:39:47.286661 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.90s
2025-10-09 10:39:47.286669 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.78s
2025-10-09 10:39:47.286677 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.78s
2025-10-09 10:39:47.286685 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.62s
2025-10-09 10:39:47.286693 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.54s
2025-10-09 10:39:47.286701 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.46s
2025-10-09 10:39:47.286709 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.24s
2025-10-09 10:39:47.286717 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.08s
2025-10-09 10:39:47.286724 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.96s
2025-10-09 10:39:47.286732 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.95s
2025-10-09 10:39:47.286740 | orchestrator | 2025-10-09 10:39:47 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:39:50.313537 | orchestrator | 2025-10-09 10:39:50 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED
2025-10-09 10:39:50.313635 | orchestrator | 2025-10-09 10:39:50 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED
2025-10-09 10:39:50.314206 | orchestrator | 2025-10-09 10:39:50 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:39:50.315476 | orchestrator | 2025-10-09 10:39:50 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED
2025-10-09 10:39:50.315501 | orchestrator | 2025-10-09 10:39:50 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:39:53.352935 | orchestrator | 2025-10-09 10:39:53 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED
2025-10-09 10:39:53.353262 | orchestrator | 2025-10-09 10:39:53 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED
2025-10-09 10:39:53.354138 | orchestrator | 2025-10-09 10:39:53 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:39:53.354621 | orchestrator | 2025-10-09 10:39:53 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED
2025-10-09 10:39:53.354644 | orchestrator | 2025-10-09 10:39:53 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:39:56.393751 | orchestrator | 2025-10-09 10:39:56 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED
2025-10-09 10:39:56.395792 | orchestrator | 2025-10-09 10:39:56 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED
2025-10-09 10:39:56.396917 | orchestrator | 2025-10-09 10:39:56 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09
10:39:56.397980 | orchestrator | 2025-10-09 10:39:56 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:39:56.398004 | orchestrator | 2025-10-09 10:39:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:39:59.438806 | orchestrator | 2025-10-09 10:39:59 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:39:59.439324 | orchestrator | 2025-10-09 10:39:59 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:39:59.440049 | orchestrator | 2025-10-09 10:39:59 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:39:59.440958 | orchestrator | 2025-10-09 10:39:59 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:39:59.441190 | orchestrator | 2025-10-09 10:39:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:40:02.475345 | orchestrator | 2025-10-09 10:40:02 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:40:02.476981 | orchestrator | 2025-10-09 10:40:02 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:40:02.477922 | orchestrator | 2025-10-09 10:40:02 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:40:02.479776 | orchestrator | 2025-10-09 10:40:02 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:40:02.479810 | orchestrator | 2025-10-09 10:40:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:40:05.516370 | orchestrator | 2025-10-09 10:40:05 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:40:05.518097 | orchestrator | 2025-10-09 10:40:05 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:40:05.518978 | orchestrator | 2025-10-09 10:40:05 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:40:05.520106 | orchestrator | 2025-10-09 10:40:05 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:40:05.520129 | orchestrator | 2025-10-09 10:40:05 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:40:08.559592 | orchestrator | 2025-10-09 10:40:08 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:40:08.560193 | orchestrator | 2025-10-09 10:40:08 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:40:08.560972 | orchestrator | 2025-10-09 10:40:08 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:40:08.561845 | orchestrator | 2025-10-09 10:40:08 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:40:08.561869 | orchestrator | 2025-10-09 10:40:08 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:40:11.601373 | orchestrator | 2025-10-09 10:40:11 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED 2025-10-09 10:40:11.603885 | orchestrator | 2025-10-09 10:40:11 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:40:11.604626 | orchestrator | 2025-10-09 10:40:11 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:40:11.605747 | orchestrator | 2025-10-09 10:40:11 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:40:11.605770 | orchestrator | 2025-10-09 10:40:11 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:40:14.713678 | 
orchestrator | 2025-10-09 10:40:14 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state STARTED
2025-10-09 10:40:14.713776 | orchestrator | 2025-10-09 10:40:14 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED
2025-10-09 10:40:14.713789 | orchestrator | 2025-10-09 10:40:14 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:40:14.713801 | orchestrator | 2025-10-09 10:40:14 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED
2025-10-09 10:40:14.713813 | orchestrator | 2025-10-09 10:40:14 | INFO  | Wait 1 second(s) until the next check
[The same four tasks are reported in state STARTED on every subsequent check, roughly every three seconds, from 10:40:17 through 10:41:24, each round ending with "Wait 1 second(s) until the next check".]
2025-10-09 10:41:27.633590 | orchestrator | 2025-10-09 10:41:27 | INFO  | Task 9294fc46-96c6-4ed4-b049-ec6650ce3f76 is in state SUCCESS
2025-10-09 10:41:27.634584 | orchestrator |
2025-10-09 10:41:27.634621 | orchestrator |
2025-10-09 10:41:27.634634 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:41:27.634646 | orchestrator |
2025-10-09 10:41:27.634657 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:41:27.634669 | orchestrator | Thursday 09 October 2025 10:39:05 +0000 (0:00:00.316) 0:00:00.316 ******
2025-10-09 10:41:27.634680 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:41:27.634692 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:41:27.634703 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:41:27.634714 | orchestrator |
2025-10-09 10:41:27.634725 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:41:27.634736 | orchestrator | Thursday 09 October 2025 10:39:06 +0000 (0:00:00.379) 0:00:00.696 ******
2025-10-09 10:41:27.634747 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-10-09 10:41:27.634758 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-10-09 10:41:27.634769 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-10-09 10:41:27.634779 | orchestrator |
2025-10-09 10:41:27.634807 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-10-09 10:41:27.634819 | orchestrator |
2025-10-09 10:41:27.634829 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-10-09 10:41:27.634841 | orchestrator | Thursday 09 October 2025 10:39:06 +0000 (0:00:00.451) 0:00:01.147 ******
2025-10-09 10:41:27.634852 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:41:27.634863 | orchestrator |
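The long run of "Task ... is in state STARTED" records above is the deployment tooling waiting for its asynchronous tasks to reach a terminal state before it replays the captured Ansible output. A minimal, hypothetical sketch of that kind of poll-until-done loop is shown below; wait_for_tasks and get_state are illustrative names, not the actual osism implementation.

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(get_state, task_ids, interval=1.0):
        """Poll task states until every task reaches a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)  # e.g. a Celery AsyncResult(task_id).state lookup
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

With a Celery result backend, get_state could simply be a callable that returns AsyncResult(task_id).state.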
2025-10-09 10:41:27.634874 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-10-09 10:41:27.634885 | orchestrator | Thursday 09 October 2025 10:39:07 +0000 (0:00:00.624) 0:00:01.771 ****** 2025-10-09 10:41:27.634896 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-10-09 10:41:27.634907 | orchestrator | 2025-10-09 10:41:27.634917 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-10-09 10:41:27.634928 | orchestrator | Thursday 09 October 2025 10:39:10 +0000 (0:00:03.603) 0:00:05.375 ****** 2025-10-09 10:41:27.634939 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-10-09 10:41:27.634973 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-10-09 10:41:27.634984 | orchestrator | 2025-10-09 10:41:27.634995 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-10-09 10:41:27.635006 | orchestrator | Thursday 09 October 2025 10:39:18 +0000 (0:00:07.672) 0:00:13.048 ****** 2025-10-09 10:41:27.635017 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:41:27.635027 | orchestrator | 2025-10-09 10:41:27.635038 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-10-09 10:41:27.635049 | orchestrator | Thursday 09 October 2025 10:39:22 +0000 (0:00:03.515) 0:00:16.563 ****** 2025-10-09 10:41:27.635060 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:41:27.635070 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-10-09 10:41:27.635081 | orchestrator | 2025-10-09 10:41:27.635092 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-10-09 10:41:27.635103 | orchestrator | Thursday 09 October 2025 10:39:26 +0000 (0:00:04.073) 0:00:20.636 ****** 2025-10-09 10:41:27.635113 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:41:27.635124 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-10-09 10:41:27.635135 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-10-09 10:41:27.635146 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-10-09 10:41:27.635201 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-10-09 10:41:27.635215 | orchestrator | 2025-10-09 10:41:27.635227 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-10-09 10:41:27.635239 | orchestrator | Thursday 09 October 2025 10:39:43 +0000 (0:00:17.382) 0:00:38.019 ****** 2025-10-09 10:41:27.635251 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-10-09 10:41:27.635264 | orchestrator | 2025-10-09 10:41:27.635276 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-10-09 10:41:27.635288 | orchestrator | Thursday 09 October 2025 10:39:48 +0000 (0:00:04.697) 0:00:42.717 ****** 2025-10-09 10:41:27.635304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.635343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.635366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.635380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635479 | orchestrator | 2025-10-09 10:41:27.635499 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-10-09 10:41:27.635512 | orchestrator | Thursday 09 October 2025 10:39:50 +0000 (0:00:02.155) 0:00:44.872 ****** 2025-10-09 10:41:27.635525 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-10-09 10:41:27.635537 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 
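The service-ks-register tasks earlier in this play register barbican with Keystone: a service of type key-manager, internal and public endpoints on port 9311, a service project and user, and the key-manager roles. Only testbed-node-0 reports changes because the registration runs once against the Identity API rather than per host. A rough openstacksdk sketch of the service and endpoint part follows, assuming a clouds.yaml entry named "testbed"; kolla-ansible actually performs this through its own Ansible modules.

    import openstack

    # Connect using a clouds.yaml profile; the cloud name "testbed" is an assumption.
    conn = openstack.connect(cloud="testbed")

    # Register the key-manager service, mirroring "barbican | Creating services".
    service = conn.identity.create_service(name="barbican", type="key-manager")

    # Create the internal and public endpoints, mirroring "barbican | Creating endpoints".
    for interface, url in (
            ("internal", "https://api-int.testbed.osism.xyz:9311"),
            ("public", "https://api.testbed.osism.xyz:9311"),
    ):
        conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)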
2025-10-09 10:41:27.635549 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-10-09 10:41:27.635559 | orchestrator | 2025-10-09 10:41:27.635570 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-10-09 10:41:27.635581 | orchestrator | Thursday 09 October 2025 10:39:52 +0000 (0:00:01.711) 0:00:46.584 ****** 2025-10-09 10:41:27.635592 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:27.635603 | orchestrator | 2025-10-09 10:41:27.635614 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-10-09 10:41:27.635625 | orchestrator | Thursday 09 October 2025 10:39:52 +0000 (0:00:00.249) 0:00:46.833 ****** 2025-10-09 10:41:27.635636 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:27.635647 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:27.635658 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:27.635668 | orchestrator | 2025-10-09 10:41:27.635679 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-10-09 10:41:27.635690 | orchestrator | Thursday 09 October 2025 10:39:53 +0000 (0:00:00.955) 0:00:47.788 ****** 2025-10-09 10:41:27.635701 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:41:27.635712 | orchestrator | 2025-10-09 10:41:27.635723 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-10-09 10:41:27.635734 | orchestrator | Thursday 09 October 2025 10:39:54 +0000 (0:00:00.896) 0:00:48.685 ****** 2025-10-09 10:41:27.635746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.635765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.635789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.635801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.635890 | orchestrator | 2025-10-09 10:41:27.635901 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-10-09 10:41:27.635912 | orchestrator | Thursday 09 October 2025 10:39:58 +0000 (0:00:04.115) 0:00:52.800 ****** 2025-10-09 10:41:27.635924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.635935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.635947 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.635965 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:27.635984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.636000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636024 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:27.636036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.636047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636076 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:27.636087 | orchestrator | 2025-10-09 10:41:27.636104 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-10-09 10:41:27.636115 | orchestrator | Thursday 09 October 2025 10:40:00 +0000 (0:00:02.108) 0:00:54.909 ****** 2025-10-09 10:41:27.636131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.636143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636186 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:27.636198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.636222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636252 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:27.636268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.636280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636303 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:27.636320 | orchestrator | 2025-10-09 10:41:27.636331 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-10-09 10:41:27.636342 | orchestrator | Thursday 09 October 2025 10:40:01 +0000 (0:00:00.961) 0:00:55.870 ****** 2025-10-09 10:41:27.636353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.636377 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.636389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.636400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636487 | orchestrator | 2025-10-09 10:41:27.636498 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-10-09 10:41:27.636510 | orchestrator | Thursday 09 October 2025 10:40:05 +0000 (0:00:04.281) 0:01:00.151 ****** 2025-10-09 10:41:27.636521 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:27.636531 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:41:27.636542 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:41:27.636553 | orchestrator | 2025-10-09 10:41:27.636564 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-10-09 10:41:27.636575 | orchestrator | Thursday 09 October 2025 10:40:09 +0000 (0:00:03.867) 0:01:04.019 ****** 2025-10-09 10:41:27.636585 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:41:27.636596 | orchestrator | 2025-10-09 10:41:27.636607 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-10-09 10:41:27.636617 | orchestrator | Thursday 09 October 2025 10:40:10 +0000 (0:00:01.155) 0:01:05.174 ****** 2025-10-09 10:41:27.636635 | orchestrator | 
skipping: [testbed-node-0] 2025-10-09 10:41:27.636645 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:27.636656 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:27.636674 | orchestrator | 2025-10-09 10:41:27.636685 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-10-09 10:41:27.636696 | orchestrator | Thursday 09 October 2025 10:40:11 +0000 (0:00:01.314) 0:01:06.488 ****** 2025-10-09 10:41:27.636707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.636726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.636744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.636755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.636841 | orchestrator | 2025-10-09 10:41:27.636852 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-10-09 10:41:27.636863 | orchestrator | Thursday 09 October 2025 10:40:23 +0000 (0:00:11.357) 0:01:17.846 ****** 2025-10-09 10:41:27.636874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.636893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636916 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:27.636933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.636945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.636983 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:27.636994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:41:27.637005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 
10:41:27.637017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:41:27.637028 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:27.637038 | orchestrator | 2025-10-09 10:41:27.637049 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-10-09 10:41:27.637060 | orchestrator | Thursday 09 October 2025 10:40:24 +0000 (0:00:00.809) 0:01:18.655 ****** 2025-10-09 10:41:27.637079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.637096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.637114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:27.637125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.637137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.637183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.637203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.637222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.637234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:27.637245 | orchestrator | 2025-10-09 10:41:27.637256 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-10-09 10:41:27.637267 | orchestrator | Thursday 09 October 2025 10:40:29 +0000 (0:00:04.943) 0:01:23.599 ****** 2025-10-09 10:41:27.637277 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:27.637288 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:27.637299 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:27.637309 | orchestrator | 2025-10-09 10:41:27.637320 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-10-09 10:41:27.637331 | orchestrator | Thursday 09 October 2025 10:40:29 +0000 (0:00:00.588) 0:01:24.188 ****** 2025-10-09 10:41:27.637341 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:27.637352 | orchestrator | 2025-10-09 10:41:27.637362 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-10-09 10:41:27.637373 | orchestrator | Thursday 09 October 2025 10:40:32 +0000 (0:00:02.631) 0:01:26.825 ****** 2025-10-09 10:41:27.637383 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:27.637394 | orchestrator | 2025-10-09 10:41:27.637404 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-10-09 10:41:27.637415 | orchestrator | Thursday 09 October 2025 10:40:35 +0000 (0:00:02.809) 0:01:29.635 ****** 2025-10-09 10:41:27.637425 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:27.637436 | orchestrator | 2025-10-09 10:41:27.637447 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-09 10:41:27.637458 | orchestrator | Thursday 09 October 2025 10:40:48 +0000 (0:00:13.472) 0:01:43.108 ****** 2025-10-09 10:41:27.637468 | orchestrator | 2025-10-09 10:41:27.637479 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-09 10:41:27.637489 | orchestrator | Thursday 09 October 2025 10:40:48 +0000 (0:00:00.282) 0:01:43.391 ****** 2025-10-09 10:41:27.637500 | orchestrator | 2025-10-09 10:41:27.637510 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-09 10:41:27.637521 | orchestrator | Thursday 09 October 2025 10:40:48 +0000 (0:00:00.149) 0:01:43.540 ****** 2025-10-09 10:41:27.637532 | orchestrator | 2025-10-09 10:41:27.637542 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-10-09 10:41:27.637553 | orchestrator | Thursday 09 October 2025 10:40:49 +0000 (0:00:00.159) 0:01:43.700 ****** 2025-10-09 10:41:27.637564 | 
orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:27.637575 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:41:27.637585 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:41:27.637596 | orchestrator | 2025-10-09 10:41:27.637607 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-10-09 10:41:27.637624 | orchestrator | Thursday 09 October 2025 10:41:03 +0000 (0:00:14.830) 0:01:58.530 ****** 2025-10-09 10:41:27.637634 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:27.637645 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:41:27.637662 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:41:27.637673 | orchestrator | 2025-10-09 10:41:27.637684 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-10-09 10:41:27.637694 | orchestrator | Thursday 09 October 2025 10:41:17 +0000 (0:00:14.012) 0:02:12.542 ****** 2025-10-09 10:41:27.637705 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:27.637716 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:41:27.637726 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:41:27.637737 | orchestrator | 2025-10-09 10:41:27.637748 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:41:27.637760 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-09 10:41:27.637771 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:41:27.637787 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:41:27.637798 | orchestrator | 2025-10-09 10:41:27.637809 | orchestrator | 2025-10-09 10:41:27.637820 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:41:27.637831 | orchestrator | Thursday 09 October 2025 10:41:25 +0000 (0:00:07.723) 0:02:20.266 ****** 2025-10-09 10:41:27.637842 | orchestrator | =============================================================================== 2025-10-09 10:41:27.637852 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.38s 2025-10-09 10:41:27.637863 | orchestrator | barbican : Restart barbican-api container ------------------------------ 14.83s 2025-10-09 10:41:27.637874 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 14.01s 2025-10-09 10:41:27.637884 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.47s 2025-10-09 10:41:27.637895 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.36s 2025-10-09 10:41:27.637906 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.72s 2025-10-09 10:41:27.637916 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.67s 2025-10-09 10:41:27.637927 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.94s 2025-10-09 10:41:27.637938 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.70s 2025-10-09 10:41:27.637948 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.28s 2025-10-09 10:41:27.637959 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates 
------- 4.12s 2025-10-09 10:41:27.637970 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.07s 2025-10-09 10:41:27.637980 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.87s 2025-10-09 10:41:27.637991 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.60s 2025-10-09 10:41:27.638001 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.52s 2025-10-09 10:41:27.638012 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.81s 2025-10-09 10:41:27.638071 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.64s 2025-10-09 10:41:27.638082 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.16s 2025-10-09 10:41:27.638093 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.11s 2025-10-09 10:41:27.638103 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.71s 2025-10-09 10:41:27.638114 | orchestrator | 2025-10-09 10:41:27 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:41:27.638132 | orchestrator | 2025-10-09 10:41:27 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:41:27.638143 | orchestrator | 2025-10-09 10:41:27 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:41:27.638214 | orchestrator | 2025-10-09 10:41:27 | INFO  | Task 30decca4-aa9b-44f5-b2ef-b01177fb00bf is in state STARTED 2025-10-09 10:41:27.638228 | orchestrator | 2025-10-09 10:41:27 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:30.660839 | orchestrator | 2025-10-09 10:41:30 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:41:30.660936 | orchestrator | 2025-10-09 10:41:30 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:41:30.662285 | orchestrator | 2025-10-09 10:41:30 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:41:30.662934 | orchestrator | 2025-10-09 10:41:30 | INFO  | Task 30decca4-aa9b-44f5-b2ef-b01177fb00bf is in state STARTED 2025-10-09 10:41:30.662954 | orchestrator | 2025-10-09 10:41:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:33.690481 | orchestrator | 2025-10-09 10:41:33 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:41:33.692746 | orchestrator | 2025-10-09 10:41:33 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:41:33.695100 | orchestrator | 2025-10-09 10:41:33 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:41:33.696048 | orchestrator | 2025-10-09 10:41:33 | INFO  | Task 30decca4-aa9b-44f5-b2ef-b01177fb00bf is in state STARTED 2025-10-09 10:41:33.696075 | orchestrator | 2025-10-09 10:41:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:36.726441 | orchestrator | 2025-10-09 10:41:36 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:41:36.726857 | orchestrator | 2025-10-09 10:41:36 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:41:36.729139 | orchestrator | 2025-10-09 10:41:36 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:41:36.730274 | orchestrator 
| 2025-10-09 10:41:36 | INFO  | Task 30decca4-aa9b-44f5-b2ef-b01177fb00bf is in state STARTED 2025-10-09 10:41:36.730298 | orchestrator | 2025-10-09 10:41:36 | INFO  | Wait 1 second(s) until the next check
[... the same status poll repeated roughly every 3 seconds from 10:41:39 to 10:43:14 (output condensed): tasks 75ae2e6a-c1b5-4b9d-801b-1588136efac5, 6c9274b8-38ed-4de6-9259-a62b99a74d70 and 44c57862-64c4-48e3-b0dd-5357714826ca remained in state STARTED throughout; at 10:42:16 task 30decca4-aa9b-44f5-b2ef-b01177fb00bf reached state SUCCESS and task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d appeared in state STARTED ...]
2025-10-09 10:43:14.086979 | orchestrator | 2025-10-09 10:43:14 | INFO  | Wait 1
second(s) until the next check 2025-10-09 10:43:17.138350 | orchestrator | 2025-10-09 10:43:17 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:43:17.140012 | orchestrator | 2025-10-09 10:43:17 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:17.142878 | orchestrator | 2025-10-09 10:43:17 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state STARTED 2025-10-09 10:43:17.145367 | orchestrator | 2025-10-09 10:43:17 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:43:17.145582 | orchestrator | 2025-10-09 10:43:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:43:20.186897 | orchestrator | 2025-10-09 10:43:20 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:43:20.186997 | orchestrator | 2025-10-09 10:43:20 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:20.188439 | orchestrator | 2025-10-09 10:43:20 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state STARTED 2025-10-09 10:43:20.190205 | orchestrator | 2025-10-09 10:43:20 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:43:20.190283 | orchestrator | 2025-10-09 10:43:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:43:23.248482 | orchestrator | 2025-10-09 10:43:23 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state STARTED 2025-10-09 10:43:23.250540 | orchestrator | 2025-10-09 10:43:23 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:23.252453 | orchestrator | 2025-10-09 10:43:23 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state STARTED 2025-10-09 10:43:23.254660 | orchestrator | 2025-10-09 10:43:23 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED 2025-10-09 10:43:23.255361 | orchestrator | 2025-10-09 10:43:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:43:26.292429 | orchestrator | 2025-10-09 10:43:26 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:43:26.295390 | orchestrator | 2025-10-09 10:43:26 | INFO  | Task 75ae2e6a-c1b5-4b9d-801b-1588136efac5 is in state SUCCESS 2025-10-09 10:43:26.295717 | orchestrator | 2025-10-09 10:43:26.295749 | orchestrator | 2025-10-09 10:43:26.295761 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-10-09 10:43:26.295799 | orchestrator | 2025-10-09 10:43:26.295811 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-10-09 10:43:26.295822 | orchestrator | Thursday 09 October 2025 10:41:35 +0000 (0:00:00.379) 0:00:00.379 ****** 2025-10-09 10:43:26.295834 | orchestrator | changed: [localhost] 2025-10-09 10:43:26.295846 | orchestrator | 2025-10-09 10:43:26.295857 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-10-09 10:43:26.295872 | orchestrator | Thursday 09 October 2025 10:41:36 +0000 (0:00:01.262) 0:00:01.641 ****** 2025-10-09 10:43:26.295891 | orchestrator | changed: [localhost] 2025-10-09 10:43:26.295910 | orchestrator | 2025-10-09 10:43:26.295929 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-10-09 10:43:26.295948 | orchestrator | Thursday 09 October 2025 10:42:07 +0000 (0:00:31.057) 0:00:32.699 ****** 2025-10-09 10:43:26.295967 | orchestrator | changed: [localhost] 2025-10-09 
10:43:26.295987 | orchestrator | 2025-10-09 10:43:26.296007 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:43:26.296025 | orchestrator | 2025-10-09 10:43:26.296043 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:43:26.296062 | orchestrator | Thursday 09 October 2025 10:42:13 +0000 (0:00:05.725) 0:00:38.425 ****** 2025-10-09 10:43:26.296081 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:43:26.296100 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:43:26.296120 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:43:26.296170 | orchestrator | 2025-10-09 10:43:26.296183 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:43:26.296194 | orchestrator | Thursday 09 October 2025 10:42:13 +0000 (0:00:00.356) 0:00:38.781 ****** 2025-10-09 10:43:26.296205 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-10-09 10:43:26.296216 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-10-09 10:43:26.296227 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-10-09 10:43:26.296237 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-10-09 10:43:26.296248 | orchestrator | 2025-10-09 10:43:26.296259 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-10-09 10:43:26.296270 | orchestrator | skipping: no hosts matched 2025-10-09 10:43:26.296281 | orchestrator | 2025-10-09 10:43:26.296292 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:43:26.296347 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:43:26.296365 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:43:26.296380 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:43:26.296393 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:43:26.296404 | orchestrator | 2025-10-09 10:43:26.296417 | orchestrator | 2025-10-09 10:43:26.296428 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:43:26.296441 | orchestrator | Thursday 09 October 2025 10:42:14 +0000 (0:00:00.683) 0:00:39.465 ****** 2025-10-09 10:43:26.296453 | orchestrator | =============================================================================== 2025-10-09 10:43:26.296527 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 31.06s 2025-10-09 10:43:26.296541 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.73s 2025-10-09 10:43:26.296554 | orchestrator | Ensure the destination directory exists --------------------------------- 1.26s 2025-10-09 10:43:26.296565 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-10-09 10:43:26.296578 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-10-09 10:43:26.296601 | orchestrator | 2025-10-09 10:43:26.297712 | orchestrator | 2025-10-09 10:43:26.297742 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
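The long runs of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages above are the osism client polling the state of the deploy tasks it has queued until each one finishes. The STARTED/SUCCESS values look like Celery task states; a minimal sketch of such a wait loop, assuming a Celery application object `app` and a list of task IDs (both hypothetical, not taken from this job), could look like:

    import time
    from celery.result import AsyncResult

    def wait_for_tasks(app, task_ids, interval=1):
        """Poll task states until every task has finished."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):  # sorted() copies, so discard below is safe
                state = AsyncResult(task_id, app=app).state  # e.g. STARTED, SUCCESS
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

The real osism client may well differ; the sketch only mirrors the pattern observable in the log output.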
2025-10-09 10:43:26.297801 | orchestrator | 2025-10-09 10:43:26.297813 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:43:26.297824 | orchestrator | Thursday 09 October 2025 10:38:38 +0000 (0:00:00.274) 0:00:00.274 ****** 2025-10-09 10:43:26.297834 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:43:26.297846 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:43:26.297857 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:43:26.297899 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:43:26.297911 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:43:26.297922 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:43:26.297933 | orchestrator | 2025-10-09 10:43:26.297943 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:43:26.297954 | orchestrator | Thursday 09 October 2025 10:38:39 +0000 (0:00:00.606) 0:00:00.881 ****** 2025-10-09 10:43:26.297965 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-10-09 10:43:26.297997 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-10-09 10:43:26.298009 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-10-09 10:43:26.298064 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-10-09 10:43:26.298076 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-10-09 10:43:26.298087 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-10-09 10:43:26.298097 | orchestrator | 2025-10-09 10:43:26.298118 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-10-09 10:43:26.298129 | orchestrator | 2025-10-09 10:43:26.298165 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-10-09 10:43:26.298176 | orchestrator | Thursday 09 October 2025 10:38:39 +0000 (0:00:00.589) 0:00:01.470 ****** 2025-10-09 10:43:26.298187 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:43:26.298199 | orchestrator | 2025-10-09 10:43:26.298209 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-10-09 10:43:26.298220 | orchestrator | Thursday 09 October 2025 10:38:41 +0000 (0:00:01.158) 0:00:02.629 ****** 2025-10-09 10:43:26.298231 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:43:26.298241 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:43:26.298252 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:43:26.298262 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:43:26.298317 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:43:26.298330 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:43:26.298342 | orchestrator | 2025-10-09 10:43:26.298354 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-10-09 10:43:26.298367 | orchestrator | Thursday 09 October 2025 10:38:42 +0000 (0:00:01.258) 0:00:03.887 ****** 2025-10-09 10:43:26.298378 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:43:26.298390 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:43:26.298402 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:43:26.298414 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:43:26.298426 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:43:26.298438 | orchestrator | ok: [testbed-node-5] 2025-10-09 
10:43:26.298450 | orchestrator | 2025-10-09 10:43:26.298462 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-10-09 10:43:26.298474 | orchestrator | Thursday 09 October 2025 10:38:43 +0000 (0:00:01.073) 0:00:04.961 ****** 2025-10-09 10:43:26.298486 | orchestrator | ok: [testbed-node-0] => { 2025-10-09 10:43:26.298499 | orchestrator |  "changed": false, 2025-10-09 10:43:26.298510 | orchestrator |  "msg": "All assertions passed" 2025-10-09 10:43:26.298523 | orchestrator | } 2025-10-09 10:43:26.298537 | orchestrator | ok: [testbed-node-1] => { 2025-10-09 10:43:26.298549 | orchestrator |  "changed": false, 2025-10-09 10:43:26.298561 | orchestrator |  "msg": "All assertions passed" 2025-10-09 10:43:26.298584 | orchestrator | } 2025-10-09 10:43:26.298596 | orchestrator | ok: [testbed-node-2] => { 2025-10-09 10:43:26.298609 | orchestrator |  "changed": false, 2025-10-09 10:43:26.298621 | orchestrator |  "msg": "All assertions passed" 2025-10-09 10:43:26.298633 | orchestrator | } 2025-10-09 10:43:26.298656 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:43:26.298667 | orchestrator |  "changed": false, 2025-10-09 10:43:26.298678 | orchestrator |  "msg": "All assertions passed" 2025-10-09 10:43:26.298689 | orchestrator | } 2025-10-09 10:43:26.298699 | orchestrator | ok: [testbed-node-4] => { 2025-10-09 10:43:26.298710 | orchestrator |  "changed": false, 2025-10-09 10:43:26.298721 | orchestrator |  "msg": "All assertions passed" 2025-10-09 10:43:26.298732 | orchestrator | } 2025-10-09 10:43:26.298742 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:43:26.298753 | orchestrator |  "changed": false, 2025-10-09 10:43:26.298763 | orchestrator |  "msg": "All assertions passed" 2025-10-09 10:43:26.298774 | orchestrator | } 2025-10-09 10:43:26.298785 | orchestrator | 2025-10-09 10:43:26.298795 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-10-09 10:43:26.298806 | orchestrator | Thursday 09 October 2025 10:38:44 +0000 (0:00:00.722) 0:00:05.683 ****** 2025-10-09 10:43:26.298817 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.298827 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.298838 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.298848 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.298859 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.298869 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.298880 | orchestrator | 2025-10-09 10:43:26.298891 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-10-09 10:43:26.298901 | orchestrator | Thursday 09 October 2025 10:38:44 +0000 (0:00:00.618) 0:00:06.302 ****** 2025-10-09 10:43:26.298912 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-10-09 10:43:26.298923 | orchestrator | 2025-10-09 10:43:26.298934 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-10-09 10:43:26.298944 | orchestrator | Thursday 09 October 2025 10:38:48 +0000 (0:00:03.468) 0:00:09.771 ****** 2025-10-09 10:43:26.298955 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-10-09 10:43:26.298966 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-10-09 10:43:26.298977 | orchestrator | 2025-10-09 10:43:26.299002 | orchestrator | 
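The service-ks-register tasks seen above for barbican and here for neutron register each service in Keystone: a service catalog entry, its internal and public endpoints, the service project and user, and the role grants. A rough openstacksdk equivalent of the service and endpoint part, using the endpoint URLs from the log and an assumed admin cloud profile named "testbed-admin" plus an assumed region "RegionOne", might be:

    import openstack

    # Assumed clouds.yaml entry with admin credentials.
    conn = openstack.connect(cloud="testbed-admin")

    # Create the catalog entry (here: neutron as the "network" service).
    service = conn.identity.create_service(name="neutron", type="network")

    # Register one endpoint per interface, mirroring the log output above.
    for interface, url in (
        ("internal", "https://api-int.testbed.osism.xyz:9696"),
        ("public", "https://api.testbed.osism.xyz:9696"),
    ):
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # assumption; the region is not shown in the log
        )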
TASK [service-ks-register : neutron | Creating projects] *********************** 2025-10-09 10:43:26.299013 | orchestrator | Thursday 09 October 2025 10:38:55 +0000 (0:00:06.880) 0:00:16.652 ****** 2025-10-09 10:43:26.299024 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:43:26.299035 | orchestrator | 2025-10-09 10:43:26.299046 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-10-09 10:43:26.299056 | orchestrator | Thursday 09 October 2025 10:38:58 +0000 (0:00:03.399) 0:00:20.051 ****** 2025-10-09 10:43:26.299067 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:43:26.299078 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-10-09 10:43:26.299088 | orchestrator | 2025-10-09 10:43:26.299099 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-10-09 10:43:26.299110 | orchestrator | Thursday 09 October 2025 10:39:02 +0000 (0:00:04.036) 0:00:24.087 ****** 2025-10-09 10:43:26.299120 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:43:26.299131 | orchestrator | 2025-10-09 10:43:26.299160 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-10-09 10:43:26.299171 | orchestrator | Thursday 09 October 2025 10:39:06 +0000 (0:00:03.628) 0:00:27.716 ****** 2025-10-09 10:43:26.299181 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-10-09 10:43:26.299192 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-10-09 10:43:26.299210 | orchestrator | 2025-10-09 10:43:26.299227 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-10-09 10:43:26.299238 | orchestrator | Thursday 09 October 2025 10:39:14 +0000 (0:00:08.264) 0:00:35.980 ****** 2025-10-09 10:43:26.299249 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.299259 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.299270 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.299281 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.299291 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.299302 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.299313 | orchestrator | 2025-10-09 10:43:26.299323 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-10-09 10:43:26.299334 | orchestrator | Thursday 09 October 2025 10:39:15 +0000 (0:00:00.910) 0:00:36.890 ****** 2025-10-09 10:43:26.299345 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.299355 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.299366 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.299376 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.299387 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.299398 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.299409 | orchestrator | 2025-10-09 10:43:26.299419 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-10-09 10:43:26.299430 | orchestrator | Thursday 09 October 2025 10:39:17 +0000 (0:00:02.155) 0:00:39.045 ****** 2025-10-09 10:43:26.299441 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:43:26.299452 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:43:26.299462 | orchestrator | ok: [testbed-node-1] 2025-10-09 
10:43:26.299473 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:43:26.299484 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:43:26.299494 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:43:26.299505 | orchestrator | 2025-10-09 10:43:26.299516 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-10-09 10:43:26.299526 | orchestrator | Thursday 09 October 2025 10:39:18 +0000 (0:00:01.151) 0:00:40.197 ****** 2025-10-09 10:43:26.299537 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.299548 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.299559 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.299569 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.299580 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.299590 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.299601 | orchestrator | 2025-10-09 10:43:26.299612 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-10-09 10:43:26.299622 | orchestrator | Thursday 09 October 2025 10:39:20 +0000 (0:00:02.259) 0:00:42.456 ****** 2025-10-09 10:43:26.299637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.299663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.299687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.299700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.299713 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.299724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.299736 | orchestrator | 2025-10-09 10:43:26.299747 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-10-09 10:43:26.299765 | orchestrator | Thursday 09 October 2025 10:39:24 +0000 (0:00:03.651) 0:00:46.108 ****** 2025-10-09 10:43:26.299776 | orchestrator | [WARNING]: Skipped 2025-10-09 10:43:26.299787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-10-09 10:43:26.299798 | orchestrator | due to this access issue: 2025-10-09 10:43:26.299809 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-10-09 10:43:26.299819 | orchestrator | a 
directory 2025-10-09 10:43:26.299830 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:43:26.299841 | orchestrator | 2025-10-09 10:43:26.299857 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-10-09 10:43:26.299868 | orchestrator | Thursday 09 October 2025 10:39:25 +0000 (0:00:00.957) 0:00:47.065 ****** 2025-10-09 10:43:26.299879 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:43:26.299902 | orchestrator | 2025-10-09 10:43:26.299921 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-10-09 10:43:26.299939 | orchestrator | Thursday 09 October 2025 10:39:26 +0000 (0:00:01.405) 0:00:48.471 ****** 2025-10-09 10:43:26.299964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.299984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.300004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 
10:43:26.300024 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.300067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.300097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.300118 | orchestrator | 2025-10-09 10:43:26.300160 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-10-09 10:43:26.300174 | orchestrator | Thursday 09 October 2025 10:39:30 +0000 (0:00:03.661) 0:00:52.132 ****** 2025-10-09 10:43:26.300192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.300212 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.300231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.300262 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.300284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.300306 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.300336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.300349 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.300373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.300385 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.300396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.300407 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.300426 | orchestrator | 2025-10-09 10:43:26.300437 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-10-09 10:43:26.300448 | orchestrator | Thursday 09 October 2025 10:39:34 +0000 (0:00:03.870) 0:00:56.002 ****** 2025-10-09 10:43:26.300459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.300470 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.300488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.300500 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.300516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.300527 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.300539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.300558 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.300569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.300581 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.300592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.300603 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.300614 | orchestrator | 2025-10-09 10:43:26.300632 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-10-09 10:43:26.300657 | orchestrator | Thursday 09 October 2025 10:39:38 +0000 (0:00:03.787) 0:00:59.790 ****** 2025-10-09 10:43:26.300677 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.300696 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.300713 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.300733 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.300753 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.300771 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.300782 | orchestrator | 2025-10-09 10:43:26.300793 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-10-09 10:43:26.300804 | orchestrator | Thursday 09 October 2025 10:39:41 +0000 (0:00:03.382) 0:01:03.173 ****** 2025-10-09 10:43:26.300815 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.300825 | orchestrator | 2025-10-09 10:43:26.300836 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-10-09 10:43:26.300846 | orchestrator | Thursday 09 October 2025 10:39:41 +0000 (0:00:00.120) 0:01:03.294 ****** 2025-10-09 10:43:26.300857 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.300868 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.300879 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.300889 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.300900 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.300910 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.300921 | orchestrator | 2025-10-09 10:43:26.300932 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-10-09 10:43:26.300948 | orchestrator | Thursday 09 October 2025 10:39:42 +0000 (0:00:00.719) 0:01:04.013 ****** 2025-10-09 10:43:26.300960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.300982 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.300994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.301005 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.301016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.301027 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.301520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.301541 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.301558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.301577 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.301587 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.301597 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.301607 | orchestrator | 2025-10-09 10:43:26.301616 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-10-09 10:43:26.301626 | orchestrator | Thursday 09 October 2025 10:39:45 +0000 (0:00:02.686) 0:01:06.700 ****** 2025-10-09 10:43:26.301636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.301653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.301664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.301685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.301696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.301706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.301716 | orchestrator | 2025-10-09 10:43:26.301726 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-10-09 10:43:26.301735 | orchestrator | Thursday 09 October 2025 10:39:50 +0000 (0:00:05.374) 0:01:12.074 ****** 2025-10-09 10:43:26.301751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.301761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.301783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.301793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.301804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.301819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.301829 | orchestrator | 2025-10-09 10:43:26.301839 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-10-09 10:43:26.301849 | orchestrator | Thursday 09 October 2025 10:39:57 +0000 (0:00:07.085) 0:01:19.160 ****** 2025-10-09 10:43:26.301863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.301901 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.301912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.301922 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.301941 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.301952 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.301962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.301972 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.301987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.302003 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.302063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.302077 | orchestrator | skipping: 
[testbed-node-5] 2025-10-09 10:43:26.302087 | orchestrator | 2025-10-09 10:43:26.302097 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-10-09 10:43:26.302107 | orchestrator | Thursday 09 October 2025 10:40:00 +0000 (0:00:03.120) 0:01:22.280 ****** 2025-10-09 10:43:26.302118 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.302130 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.302157 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.302168 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:43:26.302179 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:43:26.302191 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:43:26.302202 | orchestrator | 2025-10-09 10:43:26.302213 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-10-09 10:43:26.302225 | orchestrator | Thursday 09 October 2025 10:40:03 +0000 (0:00:03.251) 0:01:25.531 ****** 2025-10-09 10:43:26.302237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.302249 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.302260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.302273 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.302293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 
10:43:26.302312 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.302329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.302342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.302354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.302366 | orchestrator | 2025-10-09 10:43:26.302378 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-10-09 10:43:26.302389 | orchestrator | Thursday 09 October 2025 10:40:10 +0000 (0:00:06.101) 0:01:31.633 ****** 2025-10-09 10:43:26.302400 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.302411 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.302423 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.302434 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.302456 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.302468 
| orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.302479 | orchestrator | 2025-10-09 10:43:26.302490 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-10-09 10:43:26.302500 | orchestrator | Thursday 09 October 2025 10:40:13 +0000 (0:00:03.089) 0:01:34.723 ****** 2025-10-09 10:43:26.302510 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.302520 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.302530 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.302540 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.302550 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.302560 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.302570 | orchestrator | 2025-10-09 10:43:26.302580 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-10-09 10:43:26.302591 | orchestrator | Thursday 09 October 2025 10:40:16 +0000 (0:00:03.294) 0:01:38.017 ****** 2025-10-09 10:43:26.302606 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.302616 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.302626 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.302636 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.302646 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.302656 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.302666 | orchestrator | 2025-10-09 10:43:26.302676 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-10-09 10:43:26.302686 | orchestrator | Thursday 09 October 2025 10:40:20 +0000 (0:00:03.576) 0:01:41.593 ****** 2025-10-09 10:43:26.302696 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.302705 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.302715 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.302725 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.302735 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.302744 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.302754 | orchestrator | 2025-10-09 10:43:26.302764 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-10-09 10:43:26.302774 | orchestrator | Thursday 09 October 2025 10:40:23 +0000 (0:00:03.842) 0:01:45.435 ****** 2025-10-09 10:43:26.302784 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.302794 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.302804 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.302814 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.302824 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.302833 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.302843 | orchestrator | 2025-10-09 10:43:26.302857 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-10-09 10:43:26.302867 | orchestrator | Thursday 09 October 2025 10:40:27 +0000 (0:00:04.034) 0:01:49.470 ****** 2025-10-09 10:43:26.302877 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.302887 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.302897 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.302907 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.302917 | orchestrator | skipping: [testbed-node-4] 2025-10-09 
10:43:26.302927 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.302937 | orchestrator | 2025-10-09 10:43:26.302947 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-10-09 10:43:26.302958 | orchestrator | Thursday 09 October 2025 10:40:31 +0000 (0:00:03.419) 0:01:52.890 ****** 2025-10-09 10:43:26.302968 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-10-09 10:43:26.302978 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.302988 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-10-09 10:43:26.302998 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303008 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-10-09 10:43:26.303025 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303035 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-10-09 10:43:26.303045 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.303055 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-10-09 10:43:26.303066 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.303076 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-10-09 10:43:26.303086 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.303096 | orchestrator | 2025-10-09 10:43:26.303106 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-10-09 10:43:26.303116 | orchestrator | Thursday 09 October 2025 10:40:34 +0000 (0:00:03.344) 0:01:56.234 ****** 2025-10-09 10:43:26.303126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.303151 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.303168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.303179 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.303205 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.303232 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.303242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.303253 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.303264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.303275 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.303285 | orchestrator | 2025-10-09 10:43:26.303295 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-10-09 10:43:26.303305 | orchestrator | Thursday 09 October 2025 10:40:39 +0000 (0:00:04.642) 0:02:00.877 ****** 2025-10-09 10:43:26.303321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.303332 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.303363 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.303385 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.303395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.303406 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.303421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.303432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.303443 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.303458 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.303469 | orchestrator | 2025-10-09 10:43:26.303479 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-10-09 10:43:26.303493 | orchestrator | Thursday 09 October 2025 10:40:42 +0000 (0:00:03.523) 0:02:04.400 ****** 2025-10-09 10:43:26.303503 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.303513 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303523 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303533 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.303543 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.303553 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.303563 | 
orchestrator | 2025-10-09 10:43:26.303573 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-10-09 10:43:26.303583 | orchestrator | Thursday 09 October 2025 10:40:47 +0000 (0:00:04.688) 0:02:09.089 ****** 2025-10-09 10:43:26.303593 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303602 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.303612 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303622 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:43:26.303632 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:43:26.303642 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:43:26.303652 | orchestrator | 2025-10-09 10:43:26.303662 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-10-09 10:43:26.303672 | orchestrator | Thursday 09 October 2025 10:40:55 +0000 (0:00:07.539) 0:02:16.629 ****** 2025-10-09 10:43:26.303682 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.303692 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303702 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303711 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.303721 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.303731 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.303741 | orchestrator | 2025-10-09 10:43:26.303751 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-10-09 10:43:26.303761 | orchestrator | Thursday 09 October 2025 10:40:59 +0000 (0:00:04.597) 0:02:21.226 ****** 2025-10-09 10:43:26.303771 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303781 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.303791 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303801 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.303811 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.303821 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.303830 | orchestrator | 2025-10-09 10:43:26.303840 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-10-09 10:43:26.303850 | orchestrator | Thursday 09 October 2025 10:41:04 +0000 (0:00:04.477) 0:02:25.704 ****** 2025-10-09 10:43:26.303860 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.303870 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303880 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303890 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.303899 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.303909 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.303919 | orchestrator | 2025-10-09 10:43:26.303929 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-10-09 10:43:26.303939 | orchestrator | Thursday 09 October 2025 10:41:10 +0000 (0:00:05.856) 0:02:31.561 ****** 2025-10-09 10:43:26.303949 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.303959 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.303969 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.303978 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.303988 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.303998 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.304008 
| orchestrator | 2025-10-09 10:43:26.304018 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-10-09 10:43:26.304035 | orchestrator | Thursday 09 October 2025 10:41:14 +0000 (0:00:04.027) 0:02:35.588 ****** 2025-10-09 10:43:26.304045 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.304055 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.304065 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.304075 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.304085 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.304095 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.304104 | orchestrator | 2025-10-09 10:43:26.304114 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-10-09 10:43:26.304124 | orchestrator | Thursday 09 October 2025 10:41:17 +0000 (0:00:03.665) 0:02:39.254 ****** 2025-10-09 10:43:26.304135 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.304170 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.304181 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.304191 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.304201 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.304211 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.304221 | orchestrator | 2025-10-09 10:43:26.304231 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-10-09 10:43:26.304246 | orchestrator | Thursday 09 October 2025 10:41:22 +0000 (0:00:04.589) 0:02:43.844 ****** 2025-10-09 10:43:26.304256 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.304266 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.304276 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.304286 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.304296 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.304306 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.304316 | orchestrator | 2025-10-09 10:43:26.304326 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-10-09 10:43:26.304336 | orchestrator | Thursday 09 October 2025 10:41:25 +0000 (0:00:03.357) 0:02:47.201 ****** 2025-10-09 10:43:26.304346 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:43:26.304356 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.304366 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:43:26.304376 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.304386 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:43:26.304396 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.304410 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:43:26.304421 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.304431 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:43:26.304441 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.304450 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:43:26.304460 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.304470 | orchestrator | 2025-10-09 10:43:26.304480 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-10-09 10:43:26.304490 | orchestrator | Thursday 09 October 2025 10:41:29 +0000 (0:00:03.995) 0:02:51.197 ****** 2025-10-09 10:43:26.304501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.304519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.304530 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.304540 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.304555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.304566 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.304583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:43:26.304595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.304612 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.304622 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.304632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:43:26.304642 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.304653 | orchestrator | 2025-10-09 10:43:26.304663 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-10-09 10:43:26.304673 | orchestrator | Thursday 09 October 2025 10:41:32 +0000 (0:00:02.891) 0:02:54.088 ****** 2025-10-09 10:43:26.304683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.304700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.304716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.304727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:43:26.304744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.304755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:43:26.304766 | orchestrator | 2025-10-09 10:43:26.304776 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-10-09 10:43:26.304786 | orchestrator | Thursday 09 October 2025 10:41:37 +0000 (0:00:04.739) 0:02:58.827 ****** 2025-10-09 10:43:26.304796 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:26.304806 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:26.304817 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:26.304826 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:43:26.304836 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:43:26.304850 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:43:26.304861 | orchestrator | 2025-10-09 10:43:26.304871 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-10-09 10:43:26.304881 | orchestrator | Thursday 09 October 2025 10:41:37 +0000 (0:00:00.609) 0:02:59.437 ****** 2025-10-09 10:43:26.304891 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:43:26.304901 | orchestrator | 2025-10-09 10:43:26.304911 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-10-09 10:43:26.304921 | orchestrator | Thursday 09 October 2025 10:41:40 +0000 (0:00:02.384) 0:03:01.822 ****** 2025-10-09 10:43:26.304931 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:43:26.304941 | orchestrator | 2025-10-09 10:43:26.304951 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-10-09 10:43:26.304961 | orchestrator | Thursday 09 October 2025 10:41:42 +0000 (0:00:02.084) 0:03:03.906 ****** 2025-10-09 10:43:26.304971 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:43:26.304981 | orchestrator | 2025-10-09 10:43:26.304991 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:43:26.305008 | orchestrator | Thursday 09 October 2025 10:42:29 +0000 (0:00:47.123) 0:03:51.030 ****** 2025-10-09 10:43:26.305018 | orchestrator | 2025-10-09 10:43:26.305028 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:43:26.305038 | orchestrator | Thursday 09 October 2025 10:42:29 +0000 (0:00:00.071) 0:03:51.101 ****** 2025-10-09 10:43:26.305048 | orchestrator | 2025-10-09 10:43:26.305062 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:43:26.305072 | orchestrator | Thursday 09 October 2025 10:42:29 +0000 (0:00:00.202) 0:03:51.303 ****** 
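
Each container definition iterated in the tasks above carries a healthcheck block (interval/retries/start_period/timeout plus a CMD-SHELL test such as healthcheck_curl http://192.168.16.10:9696 for neutron_server, or healthcheck_port neutron-ovn-metadata-agent 6640 for the OVN metadata agent). The minimal Python sketch below approximates what such probes amount to; it is an illustration only, not the kolla healthcheck scripts, and the plain TCP connect does not model the process name that healthcheck_port takes.

# Illustration only: not kolla's healthcheck_curl/healthcheck_port implementation,
# just an approximation of what the logged healthcheck definitions check.
# URL, port, retries and timeout values are copied from the neutron_server and
# neutron-ovn-metadata-agent entries above; the one-second pause between
# attempts is arbitrary.
import socket
import time
import urllib.error
import urllib.request


def http_probe(url="http://192.168.16.10:9696", retries=3, timeout=30):
    # Roughly what "healthcheck_curl <url>" verifies: the API answers HTTP at all.
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except urllib.error.HTTPError:
            return True  # an error status still means the server answered
        except OSError:
            time.sleep(1)
    return False


def tcp_probe(host="127.0.0.1", port=6640, retries=3, timeout=30):
    # Loose approximation of "healthcheck_port <process> <port>": the port
    # accepts a TCP connection. The real check also names a process
    # (neutron-ovn-metadata-agent), which this sketch does not model.
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            time.sleep(1)
    return False


if __name__ == "__main__":
    print("neutron-server API answers:", http_probe())
    print("port 6640 accepts connections:", tcp_probe())
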
2025-10-09 10:43:26.305082 | orchestrator |
2025-10-09 10:43:26.305092 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-10-09 10:43:26.305102 | orchestrator | Thursday 09 October 2025 10:42:29 +0000 (0:00:00.063) 0:03:51.366 ******
2025-10-09 10:43:26.305112 | orchestrator |
2025-10-09 10:43:26.305122 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-10-09 10:43:26.305132 | orchestrator | Thursday 09 October 2025 10:42:29 +0000 (0:00:00.064) 0:03:51.431 ******
2025-10-09 10:43:26.305181 | orchestrator |
2025-10-09 10:43:26.305192 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-10-09 10:43:26.305202 | orchestrator | Thursday 09 October 2025 10:42:29 +0000 (0:00:00.067) 0:03:51.498 ******
2025-10-09 10:43:26.305212 | orchestrator |
2025-10-09 10:43:26.305222 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-10-09 10:43:26.305232 | orchestrator | Thursday 09 October 2025 10:42:30 +0000 (0:00:00.079) 0:03:51.578 ******
2025-10-09 10:43:26.305242 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:26.305252 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:43:26.305261 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:43:26.305271 | orchestrator |
2025-10-09 10:43:26.305281 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-10-09 10:43:26.305291 | orchestrator | Thursday 09 October 2025 10:42:56 +0000 (0:00:26.549) 0:04:18.128 ******
2025-10-09 10:43:26.305301 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:43:26.305311 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:43:26.305320 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:43:26.305330 | orchestrator |
2025-10-09 10:43:26.305340 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:43:26.305350 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-09 10:43:26.305361 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-10-09 10:43:26.305371 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-10-09 10:43:26.305381 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-09 10:43:26.305391 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-09 10:43:26.305401 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-09 10:43:26.305411 | orchestrator |
2025-10-09 10:43:26.305421 | orchestrator |
2025-10-09 10:43:26.305431 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:43:26.305441 | orchestrator | Thursday 09 October 2025 10:43:24 +0000 (0:00:27.838) 0:04:45.967 ******
2025-10-09 10:43:26.305451 | orchestrator | ===============================================================================
2025-10-09 10:43:26.305460 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.12s
2025-10-09 10:43:26.305470 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 27.84s
2025-10-09 10:43:26.305486 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.55s
2025-10-09 10:43:26.305496 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.26s
2025-10-09 10:43:26.305506 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.54s
2025-10-09 10:43:26.305516 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.09s
2025-10-09 10:43:26.305526 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.88s
2025-10-09 10:43:26.305536 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 6.10s
2025-10-09 10:43:26.305551 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 5.86s
2025-10-09 10:43:26.305561 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.37s
2025-10-09 10:43:26.305571 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.74s
2025-10-09 10:43:26.305581 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.69s
2025-10-09 10:43:26.305591 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.64s
2025-10-09 10:43:26.305601 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.60s
2025-10-09 10:43:26.305611 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.59s
2025-10-09 10:43:26.305621 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.48s
2025-10-09 10:43:26.305631 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.04s
2025-10-09 10:43:26.305641 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.03s
2025-10-09 10:43:26.305651 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.03s
2025-10-09 10:43:26.305661 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.00s
2025-10-09 10:43:26.305675 | orchestrator | 2025-10-09 10:43:26 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:43:26.305686 | orchestrator | 2025-10-09 10:43:26 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state STARTED
2025-10-09 10:43:26.305695 | orchestrator | 2025-10-09 10:43:26 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state STARTED
2025-10-09 10:43:26.305704 | orchestrator | 2025-10-09 10:43:26 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:43:29.347944 | orchestrator | 2025-10-09 10:43:29 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED
2025-10-09 10:43:29.348712 | orchestrator | 2025-10-09 10:43:29 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:43:29.349919 | orchestrator | 2025-10-09 10:43:29 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state STARTED
2025-10-09 10:43:29.354680 | orchestrator |
2025-10-09 10:43:29.354704 | orchestrator | 2025-10-09 10:43:29 | INFO  | Task 44c57862-64c4-48e3-b0dd-5357714826ca is in state SUCCESS
2025-10-09 10:43:29.358361 | orchestrator |
2025-10-09 10:43:29.358384 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:43:29.358395 | orchestrator |
2025-10-09 10:43:29.358404 | 
orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:43:29.358413 | orchestrator | Thursday 09 October 2025 10:39:55 +0000 (0:00:01.145) 0:00:01.145 ****** 2025-10-09 10:43:29.358422 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:43:29.358432 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:43:29.358441 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:43:29.358450 | orchestrator | 2025-10-09 10:43:29.358459 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:43:29.358468 | orchestrator | Thursday 09 October 2025 10:39:56 +0000 (0:00:00.892) 0:00:02.037 ****** 2025-10-09 10:43:29.358478 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-10-09 10:43:29.358506 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-10-09 10:43:29.358515 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-10-09 10:43:29.358524 | orchestrator | 2025-10-09 10:43:29.358533 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-10-09 10:43:29.358542 | orchestrator | 2025-10-09 10:43:29.358550 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-09 10:43:29.358560 | orchestrator | Thursday 09 October 2025 10:39:56 +0000 (0:00:00.707) 0:00:02.745 ****** 2025-10-09 10:43:29.358568 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:43:29.358578 | orchestrator | 2025-10-09 10:43:29.358587 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-10-09 10:43:29.358596 | orchestrator | Thursday 09 October 2025 10:39:57 +0000 (0:00:00.630) 0:00:03.375 ****** 2025-10-09 10:43:29.358605 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-10-09 10:43:29.358614 | orchestrator | 2025-10-09 10:43:29.358622 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-10-09 10:43:29.358631 | orchestrator | Thursday 09 October 2025 10:40:01 +0000 (0:00:03.603) 0:00:06.979 ****** 2025-10-09 10:43:29.358640 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-10-09 10:43:29.358648 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-10-09 10:43:29.358657 | orchestrator | 2025-10-09 10:43:29.358666 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-10-09 10:43:29.358675 | orchestrator | Thursday 09 October 2025 10:40:08 +0000 (0:00:07.198) 0:00:14.177 ****** 2025-10-09 10:43:29.358683 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:43:29.358692 | orchestrator | 2025-10-09 10:43:29.358701 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-10-09 10:43:29.358710 | orchestrator | Thursday 09 October 2025 10:40:12 +0000 (0:00:03.685) 0:00:17.862 ****** 2025-10-09 10:43:29.358718 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:43:29.358727 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-10-09 10:43:29.358735 | orchestrator | 2025-10-09 10:43:29.358744 | orchestrator | TASK [service-ks-register : designate | Creating roles] 
************************ 2025-10-09 10:43:29.358752 | orchestrator | Thursday 09 October 2025 10:40:16 +0000 (0:00:04.692) 0:00:22.555 ****** 2025-10-09 10:43:29.358761 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:43:29.358770 | orchestrator | 2025-10-09 10:43:29.358779 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-10-09 10:43:29.358787 | orchestrator | Thursday 09 October 2025 10:40:20 +0000 (0:00:04.060) 0:00:26.616 ****** 2025-10-09 10:43:29.358796 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-10-09 10:43:29.358804 | orchestrator | 2025-10-09 10:43:29.358813 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-10-09 10:43:29.358822 | orchestrator | Thursday 09 October 2025 10:40:26 +0000 (0:00:05.607) 0:00:32.223 ****** 2025-10-09 10:43:29.358848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.358883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.358895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.358905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.358915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.358929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.358939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.358962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.358974 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.358985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.358995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359092 | orchestrator | 2025-10-09 10:43:29.359102 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-10-09 10:43:29.359112 | orchestrator | Thursday 09 October 2025 10:40:30 +0000 (0:00:04.148) 0:00:36.372 ****** 2025-10-09 10:43:29.359122 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:29.359132 | orchestrator | 2025-10-09 10:43:29.359184 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-10-09 10:43:29.359196 | orchestrator | Thursday 09 October 2025 10:40:30 +0000 (0:00:00.158) 0:00:36.531 ****** 2025-10-09 10:43:29.359205 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:29.359221 
| orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:29.359230 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:29.359240 | orchestrator | 2025-10-09 10:43:29.359250 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-09 10:43:29.359259 | orchestrator | Thursday 09 October 2025 10:40:31 +0000 (0:00:00.383) 0:00:36.915 ****** 2025-10-09 10:43:29.359269 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:43:29.359279 | orchestrator | 2025-10-09 10:43:29.359293 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-10-09 10:43:29.359303 | orchestrator | Thursday 09 October 2025 10:40:31 +0000 (0:00:00.903) 0:00:37.819 ****** 2025-10-09 10:43:29.359319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.359329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.359339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-10-09 10:43:29.359348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.359523 | orchestrator | 2025-10-09 10:43:29.359532 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-10-09 10:43:29.359541 | orchestrator | Thursday 09 October 2025 10:40:40 +0000 (0:00:08.899) 0:00:46.718 ****** 2025-10-09 10:43:29.359559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.359573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.359583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359625 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:29.359639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.359654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.359664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359706 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:29.359719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.359735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.359744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359787 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:29.359796 | orchestrator | 2025-10-09 10:43:29.359804 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-10-09 10:43:29.359813 | orchestrator | Thursday 09 October 2025 10:40:42 +0000 (0:00:01.458) 0:00:48.177 ****** 2025-10-09 10:43:29.359826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.359841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.359851 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359892 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:29.359905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.359920 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.359929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.359972 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:29.359985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.360000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.360010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360052 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:29.360061 | orchestrator | 2025-10-09 10:43:29.360070 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-10-09 10:43:29.360079 | orchestrator | Thursday 09 October 2025 10:40:46 +0000 (0:00:04.454) 0:00:52.632 ****** 2025-10-09 10:43:29.360092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.360119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.360135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 
10:43:29.360162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360336 | orchestrator | 2025-10-09 10:43:29.360345 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-10-09 10:43:29.360354 | orchestrator | Thursday 09 October 2025 10:40:54 +0000 (0:00:07.657) 0:01:00.289 ****** 2025-10-09 10:43:29.360367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.360383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.360398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.360408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2025-10-09 10:43:29.360430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2025-10-09 10:43:29.360617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360646 | orchestrator | 2025-10-09 10:43:29.360662 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-10-09 10:43:29.360677 | orchestrator | Thursday 09 October 2025 10:41:23 +0000 (0:00:29.091) 0:01:29.381 ****** 2025-10-09 10:43:29.360693 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-10-09 10:43:29.360709 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-10-09 10:43:29.360724 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-10-09 10:43:29.360736 | orchestrator | 2025-10-09 10:43:29.360745 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-10-09 10:43:29.360754 | orchestrator | Thursday 09 October 2025 10:41:32 +0000 (0:00:08.616) 0:01:37.998 ****** 2025-10-09 10:43:29.360767 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-10-09 10:43:29.360783 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-10-09 10:43:29.360792 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-10-09 10:43:29.360800 | orchestrator | 2025-10-09 10:43:29.360809 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-10-09 10:43:29.360817 | orchestrator | Thursday 09 October 2025 10:41:37 +0000 (0:00:05.255) 0:01:43.253 ****** 2025-10-09 10:43:29.360835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.360845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.360857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.360871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360916 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.360968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.360994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}}) 2025-10-09 10:43:29.361085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361117 | orchestrator | 2025-10-09 10:43:29.361125 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-10-09 10:43:29.361135 | orchestrator | Thursday 09 October 2025 10:41:40 +0000 (0:00:03.063) 0:01:46.316 ****** 2025-10-09 10:43:29.361175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.361185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.361194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.361203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361376 | orchestrator | 2025-10-09 10:43:29.361389 
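
The tasks above template the bind9 backend wiring for Designate (pools.yaml, named.conf, rndc.conf, rndc.key) onto each controller. The generated files themselves are not printed in this log; purely as orientation, a bind9 pool definition generally has the shape sketched below. All hostnames, addresses, ports and the key path are illustrative placeholders, not values taken from this deployment.

- name: default
  description: Pool backed by the designate_backend_bind9 containers
  ns_records:
    - hostname: ns1.testbed.osism.xyz.   # placeholder NS record
      priority: 1
  nameservers:
    - host: 192.168.16.10                # bind9 instance Designate polls after zone changes (placeholder)
      port: 53
  targets:
    - type: bind9
      masters:
        - host: 192.168.16.10            # designate-mdns address the backend pulls zones from (placeholder)
          port: 5354
      options:
        host: 192.168.16.10
        port: 53
        rndc_host: 192.168.16.10
        rndc_port: 953
        rndc_key_file: /etc/designate/rndc.key   # corresponds to the rndc.key templated above
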
| orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-09 10:43:29.361398 | orchestrator | Thursday 09 October 2025 10:41:43 +0000 (0:00:03.040) 0:01:49.357 ****** 2025-10-09 10:43:29.361407 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:29.361416 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:29.361425 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:29.361434 | orchestrator | 2025-10-09 10:43:29.361442 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-10-09 10:43:29.361451 | orchestrator | Thursday 09 October 2025 10:41:43 +0000 (0:00:00.356) 0:01:49.714 ****** 2025-10-09 10:43:29.361467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.361476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.361486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361534 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:29.361548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.361558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.361567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361612 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:29.361627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:43:29.361636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:43:29.361646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:43:29.361691 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:29.361700 | orchestrator | 2025-10-09 10:43:29.361709 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-10-09 10:43:29.361718 | orchestrator | Thursday 09 October 2025 10:41:45 +0000 (0:00:01.575) 0:01:51.289 ****** 2025-10-09 10:43:29.361733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.361743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.361757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:43:29.361766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361848 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:43:29.361935 | orchestrator | 2025-10-09 10:43:29.361948 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-09 10:43:29.361957 | orchestrator | Thursday 09 October 2025 10:41:51 +0000 (0:00:06.081) 0:01:57.371 ****** 2025-10-09 10:43:29.361966 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:29.361975 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:29.361983 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:29.361992 | orchestrator | 2025-10-09 10:43:29.362001 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-10-09 10:43:29.362010 | orchestrator | Thursday 09 October 2025 10:41:51 +0000 (0:00:00.402) 0:01:57.774 ****** 2025-10-09 10:43:29.362067 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-10-09 10:43:29.362078 | orchestrator | 2025-10-09 10:43:29.362087 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-10-09 10:43:29.362096 | orchestrator | Thursday 09 October 2025 10:41:54 +0000 (0:00:02.378) 0:02:00.152 ****** 2025-10-09 10:43:29.362104 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:43:29.362113 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-10-09 10:43:29.362122 | orchestrator | 2025-10-09 10:43:29.362130 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-10-09 10:43:29.362193 | orchestrator | Thursday 09 October 2025 10:41:56 +0000 (0:00:02.481) 0:02:02.634 ****** 2025-10-09 10:43:29.362209 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:43:29.362218 | orchestrator | 2025-10-09 10:43:29.362227 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-10-09 10:43:29.362236 | orchestrator | Thursday 09 October 2025 10:42:15 +0000 (0:00:18.851) 0:02:21.486 ****** 2025-10-09 10:43:29.362244 | 
orchestrator |
2025-10-09 10:43:29.362253 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-10-09 10:43:29.362261 | orchestrator | Thursday 09 October 2025 10:42:16 +0000 (0:00:00.434) 0:02:21.920 ******
2025-10-09 10:43:29.362270 | orchestrator |
2025-10-09 10:43:29.362278 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-10-09 10:43:29.362287 | orchestrator | Thursday 09 October 2025 10:42:16 +0000 (0:00:00.075) 0:02:21.996 ******
2025-10-09 10:43:29.362295 | orchestrator |
2025-10-09 10:43:29.362304 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-10-09 10:43:29.362313 | orchestrator | Thursday 09 October 2025 10:42:16 +0000 (0:00:00.068) 0:02:22.064 ******
2025-10-09 10:43:29.362321 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:29.362330 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:43:29.362338 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:43:29.362347 | orchestrator |
2025-10-09 10:43:29.362356 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-10-09 10:43:29.362364 | orchestrator | Thursday 09 October 2025 10:42:26 +0000 (0:00:09.893) 0:02:31.958 ******
2025-10-09 10:43:29.362373 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:43:29.362381 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:43:29.362390 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:29.362398 | orchestrator |
2025-10-09 10:43:29.362407 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-10-09 10:43:29.362415 | orchestrator | Thursday 09 October 2025 10:42:35 +0000 (0:00:09.125) 0:02:41.084 ******
2025-10-09 10:43:29.362424 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:29.362432 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:43:29.362441 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:43:29.362450 | orchestrator |
2025-10-09 10:43:29.362458 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-10-09 10:43:29.362467 | orchestrator | Thursday 09 October 2025 10:42:42 +0000 (0:00:07.321) 0:02:48.405 ******
2025-10-09 10:43:29.362475 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:29.362483 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:43:29.362491 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:43:29.362499 | orchestrator |
2025-10-09 10:43:29.362507 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-10-09 10:43:29.362514 | orchestrator | Thursday 09 October 2025 10:42:54 +0000 (0:00:11.748) 0:03:00.153 ******
2025-10-09 10:43:29.362522 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:43:29.362530 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:29.362538 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:43:29.362546 | orchestrator |
2025-10-09 10:43:29.362554 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-10-09 10:43:29.362561 | orchestrator | Thursday 09 October 2025 10:43:08 +0000 (0:00:13.952) 0:03:14.106 ******
2025-10-09 10:43:29.362569 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:29.362577 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:43:29.362584 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:43:29.362592 | orchestrator |
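The designate containers restarted above carry kolla healthchecks of the form `healthcheck_listen named 53` (designate_backend_bind9) and `healthcheck_port <service> 5672` (central, mdns, producer, worker), as listed in the container definitions earlier in this output. Those helpers run inside the containers and inspect sockets; the snippet below is only a rough external stand-in, and the node address is an assumption based on the 192.168.16.10-12 addresses that appear later in this log.

```python
# Rough external stand-in for the kolla healthchecks shown above
# ("healthcheck_listen named 53", "healthcheck_port <service> 5672").
# The real helpers inspect sockets inside the container; this sketch
# only reports whether the ports accept TCP connections from wherever
# it is run. The address 192.168.16.10 is an assumption taken from the
# node IPs used elsewhere in this log.
import socket

CHECKS = [
    ("192.168.16.10", 53),    # bind9 DNS listener
    ("192.168.16.10", 5672),  # RabbitMQ port referenced by the healthcheck_port tests
]

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in CHECKS:
        state = "reachable" if tcp_reachable(host, port) else "unreachable"
        print(f"{host}:{port} {state}")
```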
2025-10-09 10:43:29.362600 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-10-09 10:43:29.362608 | orchestrator | Thursday 09 October 2025 10:43:18 +0000 (0:00:10.570) 0:03:24.676 ******
2025-10-09 10:43:29.362616 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:29.362623 | orchestrator |
2025-10-09 10:43:29.362631 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:43:29.362639 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-10-09 10:43:29.362654 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-09 10:43:29.362662 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-09 10:43:29.362670 | orchestrator |
2025-10-09 10:43:29.362677 | orchestrator |
2025-10-09 10:43:29.362685 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:43:29.362693 | orchestrator | Thursday 09 October 2025 10:43:26 +0000 (0:00:08.165) 0:03:32.842 ******
2025-10-09 10:43:29.362705 | orchestrator | ===============================================================================
2025-10-09 10:43:29.362714 | orchestrator | designate : Copying over designate.conf -------------------------------- 29.09s
2025-10-09 10:43:29.362721 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.85s
2025-10-09 10:43:29.362729 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.95s
2025-10-09 10:43:29.362737 | orchestrator | designate : Restart designate-producer container ----------------------- 11.75s
2025-10-09 10:43:29.362745 | orchestrator | designate : Restart designate-worker container ------------------------- 10.57s
2025-10-09 10:43:29.362752 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.89s
2025-10-09 10:43:29.362760 | orchestrator | designate : Restart designate-api container ----------------------------- 9.13s
2025-10-09 10:43:29.362768 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.90s
2025-10-09 10:43:29.362776 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.62s
2025-10-09 10:43:29.362784 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.17s
2025-10-09 10:43:29.362796 | orchestrator | designate : Copying over config.json files for services ----------------- 7.65s
2025-10-09 10:43:29.362804 | orchestrator | designate : Restart designate-central container ------------------------- 7.32s
2025-10-09 10:43:29.362812 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.20s
2025-10-09 10:43:29.362820 | orchestrator | designate : Check designate containers ---------------------------------- 6.08s
2025-10-09 10:43:29.362828 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 5.61s
2025-10-09 10:43:29.362836 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.26s
2025-10-09 10:43:29.362843 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.69s
2025-10-09 10:43:29.362851 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 4.46s
2025-10-09 10:43:29.362859 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.15s
2025-10-09 10:43:29.362867 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.06s
2025-10-09 10:43:29.362875 | orchestrator | 2025-10-09 10:43:29 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED
2025-10-09 10:43:29.362883 | orchestrator | 2025-10-09 10:43:29 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:43:32.404767 | orchestrator | 2025-10-09 10:43:32 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED
2025-10-09 10:43:32.407433 | orchestrator | 2025-10-09 10:43:32 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:43:32.409705 | orchestrator | 2025-10-09 10:43:32 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state STARTED
2025-10-09 10:43:32.411691 | orchestrator | 2025-10-09 10:43:32 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED
2025-10-09 10:43:32.412447 | orchestrator | 2025-10-09 10:43:32 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:43:35.455505 | orchestrator | 2025-10-09 10:43:35 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED
2025-10-09 10:43:35.456392 | orchestrator | 2025-10-09 10:43:35 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:43:35.459446 | orchestrator | 2025-10-09 10:43:35 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state STARTED
2025-10-09 10:43:35.459553 | orchestrator | 2025-10-09 10:43:35 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED
2025-10-09 10:43:35.459632 | orchestrator | 2025-10-09 10:43:35 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:43:38.512068 | orchestrator | 2025-10-09 10:43:38 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED
2025-10-09 10:43:38.514716 | orchestrator | 2025-10-09 10:43:38 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:43:38.516782 | orchestrator | 2025-10-09 10:43:38 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state STARTED
2025-10-09 10:43:38.518529 | orchestrator | 2025-10-09 10:43:38 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED
2025-10-09 10:43:38.519928 | orchestrator | 2025-10-09 10:43:38 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:43:41.568871 | orchestrator | 2025-10-09 10:43:41 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED
2025-10-09 10:43:41.571361 | orchestrator | 2025-10-09 10:43:41 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED
2025-10-09 10:43:41.574629 | orchestrator | 2025-10-09 10:43:41 | INFO  | Task 6af1d365-eb21-440b-b6d7-8bbbeff3d84d is in state SUCCESS
2025-10-09 10:43:41.575176 | orchestrator |
2025-10-09 10:43:41.577489 | orchestrator |
2025-10-09 10:43:41.577542 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:43:41.577555 | orchestrator |
2025-10-09 10:43:41.577567 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:43:41.577595 | orchestrator | Thursday 09 October 2025 10:42:21 +0000 (0:00:00.375) 0:00:00.375 ******
2025-10-09 10:43:41.577621 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:43:41.577644 | orchestrator | ok: [testbed-node-1]
2025-10-09
10:43:41.577655 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:43:41.577666 | orchestrator | 2025-10-09 10:43:41.577677 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:43:41.577688 | orchestrator | Thursday 09 October 2025 10:42:22 +0000 (0:00:00.347) 0:00:00.722 ****** 2025-10-09 10:43:41.577699 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-10-09 10:43:41.577711 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-10-09 10:43:41.577721 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-10-09 10:43:41.577732 | orchestrator | 2025-10-09 10:43:41.577743 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-10-09 10:43:41.577753 | orchestrator | 2025-10-09 10:43:41.577764 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-10-09 10:43:41.577775 | orchestrator | Thursday 09 October 2025 10:42:22 +0000 (0:00:00.442) 0:00:01.164 ****** 2025-10-09 10:43:41.577785 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:43:41.577797 | orchestrator | 2025-10-09 10:43:41.577808 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-10-09 10:43:41.577818 | orchestrator | Thursday 09 October 2025 10:42:23 +0000 (0:00:00.995) 0:00:02.160 ****** 2025-10-09 10:43:41.577829 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-10-09 10:43:41.577840 | orchestrator | 2025-10-09 10:43:41.577851 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-10-09 10:43:41.577862 | orchestrator | Thursday 09 October 2025 10:42:27 +0000 (0:00:03.929) 0:00:06.090 ****** 2025-10-09 10:43:41.577897 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-10-09 10:43:41.577909 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-10-09 10:43:41.577920 | orchestrator | 2025-10-09 10:43:41.577931 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-10-09 10:43:41.577942 | orchestrator | Thursday 09 October 2025 10:42:34 +0000 (0:00:07.036) 0:00:13.126 ****** 2025-10-09 10:43:41.577952 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:43:41.577963 | orchestrator | 2025-10-09 10:43:41.577974 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-10-09 10:43:41.577985 | orchestrator | Thursday 09 October 2025 10:42:38 +0000 (0:00:03.888) 0:00:17.015 ****** 2025-10-09 10:43:41.577995 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:43:41.578006 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-10-09 10:43:41.578066 | orchestrator | 2025-10-09 10:43:41.578083 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-10-09 10:43:41.578095 | orchestrator | Thursday 09 October 2025 10:42:42 +0000 (0:00:04.152) 0:00:21.167 ****** 2025-10-09 10:43:41.578107 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:43:41.578119 | orchestrator | 2025-10-09 10:43:41.578131 | orchestrator | TASK [service-ks-register : placement | Granting user 
roles] ******************* 2025-10-09 10:43:41.578178 | orchestrator | Thursday 09 October 2025 10:42:46 +0000 (0:00:03.564) 0:00:24.732 ****** 2025-10-09 10:43:41.578190 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-10-09 10:43:41.578202 | orchestrator | 2025-10-09 10:43:41.578215 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-10-09 10:43:41.578227 | orchestrator | Thursday 09 October 2025 10:42:50 +0000 (0:00:04.224) 0:00:28.956 ****** 2025-10-09 10:43:41.578239 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:41.578252 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:41.578264 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:41.578275 | orchestrator | 2025-10-09 10:43:41.578288 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-10-09 10:43:41.578300 | orchestrator | Thursday 09 October 2025 10:42:50 +0000 (0:00:00.288) 0:00:29.244 ****** 2025-10-09 10:43:41.578316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578394 | orchestrator | 2025-10-09 10:43:41.578406 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-10-09 10:43:41.578419 | orchestrator | Thursday 09 October 2025 10:42:51 +0000 (0:00:00.848) 0:00:30.093 ****** 2025-10-09 10:43:41.578431 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:41.578442 | orchestrator | 2025-10-09 10:43:41.578453 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-10-09 10:43:41.578464 | orchestrator | Thursday 09 October 2025 10:42:51 +0000 (0:00:00.125) 0:00:30.218 ****** 2025-10-09 10:43:41.578474 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:41.578485 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:41.578496 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:41.578507 | orchestrator | 2025-10-09 10:43:41.578517 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-10-09 10:43:41.578528 | orchestrator | Thursday 09 October 2025 10:42:51 +0000 (0:00:00.426) 0:00:30.645 ****** 2025-10-09 10:43:41.578539 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:43:41.578550 | orchestrator | 2025-10-09 10:43:41.578560 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-10-09 10:43:41.578571 | orchestrator | Thursday 09 October 2025 10:42:52 +0000 (0:00:00.482) 0:00:31.127 ****** 2025-10-09 10:43:41.578582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578637 | orchestrator | 2025-10-09 10:43:41.578648 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-10-09 10:43:41.578659 | orchestrator | Thursday 09 October 2025 10:42:53 +0000 (0:00:01.519) 0:00:32.647 ****** 2025-10-09 10:43:41.578670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.578681 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:41.578692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.578704 | orchestrator | skipping: [testbed-node-1] 
2025-10-09 10:43:41.578725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.578743 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:41.578754 | orchestrator | 2025-10-09 10:43:41.578765 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-10-09 10:43:41.578775 | orchestrator | Thursday 09 October 2025 10:42:54 +0000 (0:00:00.994) 0:00:33.641 ****** 2025-10-09 10:43:41.578786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.578797 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:41.578808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.578819 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:43:41.578830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.578841 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:41.578852 | orchestrator | 2025-10-09 10:43:41.578863 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-10-09 10:43:41.578874 | orchestrator | Thursday 09 October 2025 10:42:56 +0000 (0:00:01.426) 0:00:35.068 ****** 2025-10-09 10:43:41.578902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578938 | orchestrator | 2025-10-09 10:43:41.578948 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-10-09 10:43:41.578959 | orchestrator | Thursday 09 October 2025 10:42:58 +0000 (0:00:02.064) 0:00:37.132 ****** 2025-10-09 10:43:41.578970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.578981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.579011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
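Each placement-api item above configures a Docker healthcheck of `healthcheck_curl http://192.168.16.1x:8780` with a 30s interval, 3 retries, a 5s start period and a 30s timeout. kolla's healthcheck_curl helper itself is not part of this log; the following is only a minimal sketch of the same idea for one node, under the assumption that any HTTP response from the port counts as "up".

```python
# Minimal approximation of the healthcheck_curl test configured above.
# Assumption: any HTTP response (even 4xx) means the API is answering;
# the real kolla helper may apply stricter rules.
import sys
import urllib.error
import urllib.request

URL = "http://192.168.16.10:8780"  # internal placement-api address on testbed-node-0, taken from the log

def check(url: str, timeout: float = 30.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the server answered, just not with a 2xx/3xx status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, timeout, DNS failure, ...

if __name__ == "__main__":
    sys.exit(0 if check(URL) else 1)
```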
2025-10-09 10:43:41.579023 | orchestrator | 2025-10-09 10:43:41.579034 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-10-09 10:43:41.579045 | orchestrator | Thursday 09 October 2025 10:43:02 +0000 (0:00:04.433) 0:00:41.565 ****** 2025-10-09 10:43:41.579056 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-09 10:43:41.579067 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-09 10:43:41.579078 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-09 10:43:41.579088 | orchestrator | 2025-10-09 10:43:41.579099 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-10-09 10:43:41.579110 | orchestrator | Thursday 09 October 2025 10:43:04 +0000 (0:00:01.680) 0:00:43.246 ****** 2025-10-09 10:43:41.579120 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:43:41.579131 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:43:41.579158 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:43:41.579169 | orchestrator | 2025-10-09 10:43:41.579180 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-10-09 10:43:41.579190 | orchestrator | Thursday 09 October 2025 10:43:06 +0000 (0:00:01.835) 0:00:45.082 ****** 2025-10-09 10:43:41.579201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.579219 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:43:41.579230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.579241 | orchestrator | skipping: [testbed-node-1] 2025-10-09 
10:43:41.579264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:43:41.579276 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:43:41.579286 | orchestrator | 2025-10-09 10:43:41.579297 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-10-09 10:43:41.579308 | orchestrator | Thursday 09 October 2025 10:43:06 +0000 (0:00:00.496) 0:00:45.578 ****** 2025-10-09 10:43:41.579319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.579330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:43:41.579354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-09 10:43:41.579365 | orchestrator |
2025-10-09 10:43:41.579376 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-10-09 10:43:41.579387 | orchestrator | Thursday 09 October 2025 10:43:08 +0000 (0:00:01.236) 0:00:46.814 ******
2025-10-09 10:43:41.579397 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:41.579408 | orchestrator |
2025-10-09 10:43:41.579419 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-10-09 10:43:41.579429 | orchestrator | Thursday 09 October 2025 10:43:10 +0000 (0:00:02.425) 0:00:49.240 ******
2025-10-09 10:43:41.579440 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:41.579450 | orchestrator |
2025-10-09 10:43:41.579461 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-10-09 10:43:41.579472 | orchestrator | Thursday 09 October 2025 10:43:12 +0000 (0:00:02.065) 0:00:51.306 ******
2025-10-09 10:43:41.579488 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:41.579499 | orchestrator |
2025-10-09 10:43:41.579510 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-10-09 10:43:41.579526 | orchestrator | Thursday 09 October 2025 10:43:27 +0000 (0:00:14.914) 0:01:06.220 ******
2025-10-09 10:43:41.579537 | orchestrator |
2025-10-09 10:43:41.579548 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-10-09 10:43:41.579559 | orchestrator | Thursday 09 October 2025 10:43:27 +0000 (0:00:00.072) 0:01:06.293 ******
2025-10-09 10:43:41.579569 | orchestrator |
2025-10-09 10:43:41.579580 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-10-09 10:43:41.579591 | orchestrator | Thursday 09 October 2025 10:43:27 +0000 (0:00:00.064) 0:01:06.357 ******
2025-10-09 10:43:41.579601 | orchestrator |
2025-10-09 10:43:41.579612 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-10-09 10:43:41.579623 | orchestrator | Thursday 09 October 2025 10:43:27 +0000 (0:00:00.077) 0:01:06.435 ******
2025-10-09 10:43:41.579634 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:43:41.579645 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:43:41.579655 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:43:41.579666 | orchestrator |
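Once the "Restart placement-api container" handler has run, Docker keeps evaluating the healthcheck defined in the placement-api items above and records the result in the container state. Reading that status on a node can be sketched as follows, assuming the kolla containers run under Docker and the calling user may use the Docker CLI.

```python
# Read the health status Docker records for the restarted container.
# Assumption: the testbed nodes run the kolla containers under Docker
# and the Docker CLI is usable by the calling user.
import subprocess

def container_health(name: str = "placement_api") -> str:
    result = subprocess.run(
        ["docker", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()  # typically "starting", "healthy" or "unhealthy"

if __name__ == "__main__":
    print(container_health())
```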
2025-10-09 10:43:41.579677 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:43:41.579688 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-09 10:43:41.579700 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-09 10:43:41.579711 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-09 10:43:41.579722 | orchestrator |
2025-10-09 10:43:41.579733 | orchestrator |
2025-10-09 10:43:41.579744 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:43:41.579760 | orchestrator | Thursday 09 October 2025 10:43:38 +0000 (0:00:10.960) 0:01:17.395 ******
2025-10-09 10:43:41.579771 | orchestrator | ===============================================================================
2025-10-09 10:43:41.579782 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.91s
2025-10-09 10:43:41.579792 | orchestrator | placement : Restart placement-api container ---------------------------- 10.96s
2025-10-09 10:43:41.579803 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.04s
2025-10-09 10:43:41.579814 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.43s
2025-10-09 10:43:41.579824 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.22s
2025-10-09 10:43:41.579835 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.15s
2025-10-09 10:43:41.579846 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.93s
2025-10-09 10:43:41.579857 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.89s
2025-10-09 10:43:41.579867 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.56s
2025-10-09 10:43:41.579878 | orchestrator | placement : Creating placement databases -------------------------------- 2.43s
2025-10-09 10:43:41.579889 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.07s
2025-10-09 10:43:41.579899 | orchestrator | placement : Copying over config.json files for services ----------------- 2.06s
2025-10-09 10:43:41.579910 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.84s
2025-10-09 10:43:41.579921 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.68s
2025-10-09 10:43:41.579931 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.52s
2025-10-09 10:43:41.579942 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.43s
2025-10-09 10:43:41.579953 | orchestrator | placement : Check placement containers ---------------------------------- 1.24s
2025-10-09 10:43:41.579964 | orchestrator | placement : include_tasks ----------------------------------------------- 1.00s
2025-10-09 10:43:41.579974 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.99s
2025-10-09 10:43:41.579985 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.85s
2025-10-09 10:43:41.580399 | orchestrator | 2025-10-09 10:43:41 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED
2025-10-09 10:43:41.583458 | orchestrator | 2025-10-09 10:43:41 | INFO  | Task 0caed55e-6fe4-4ef5-b2d5-f55f74b94dda is in state STARTED
2025-10-09 10:43:41.584316 | orchestrator | 2025-10-09 10:43:41 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:43:44.628227 | orchestrator | 2025-10-09 10:43:44 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED
2025-10-09 10:43:44.629374 |
orchestrator | 2025-10-09 10:43:44 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:44.631503 | orchestrator | 2025-10-09 10:43:44 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:43:44.633834 | orchestrator | 2025-10-09 10:43:44 | INFO  | Task 0caed55e-6fe4-4ef5-b2d5-f55f74b94dda is in state STARTED 2025-10-09 10:43:44.633862 | orchestrator | 2025-10-09 10:43:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:43:47.682428 | orchestrator | 2025-10-09 10:43:47 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:43:47.682937 | orchestrator | 2025-10-09 10:43:47 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:47.684247 | orchestrator | 2025-10-09 10:43:47 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:43:47.685158 | orchestrator | 2025-10-09 10:43:47 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:43:47.686092 | orchestrator | 2025-10-09 10:43:47 | INFO  | Task 0caed55e-6fe4-4ef5-b2d5-f55f74b94dda is in state SUCCESS 2025-10-09 10:43:47.686228 | orchestrator | 2025-10-09 10:43:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:43:50.735305 | orchestrator | 2025-10-09 10:43:50 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:43:50.737317 | orchestrator | 2025-10-09 10:43:50 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:50.741350 | orchestrator | 2025-10-09 10:43:50 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:43:50.743703 | orchestrator | 2025-10-09 10:43:50 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:43:50.745575 | orchestrator | 2025-10-09 10:43:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:43:53.785064 | orchestrator | 2025-10-09 10:43:53 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:43:53.788378 | orchestrator | 2025-10-09 10:43:53 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:53.790455 | orchestrator | 2025-10-09 10:43:53 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:43:53.791767 | orchestrator | 2025-10-09 10:43:53 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:43:53.791931 | orchestrator | 2025-10-09 10:43:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:43:56.844801 | orchestrator | 2025-10-09 10:43:56 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:43:56.847406 | orchestrator | 2025-10-09 10:43:56 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:56.849727 | orchestrator | 2025-10-09 10:43:56 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:43:56.852086 | orchestrator | 2025-10-09 10:43:56 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:43:56.852111 | orchestrator | 2025-10-09 10:43:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:43:59.889597 | orchestrator | 2025-10-09 10:43:59 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:43:59.889993 | orchestrator | 2025-10-09 10:43:59 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:43:59.891602 | 
orchestrator | 2025-10-09 10:43:59 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:43:59.895624 | orchestrator | 2025-10-09 10:43:59 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:43:59.895648 | orchestrator | 2025-10-09 10:43:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:02.934986 | orchestrator | 2025-10-09 10:44:02 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:02.937362 | orchestrator | 2025-10-09 10:44:02 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:02.939342 | orchestrator | 2025-10-09 10:44:02 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:02.941902 | orchestrator | 2025-10-09 10:44:02 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:02.942307 | orchestrator | 2025-10-09 10:44:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:05.988957 | orchestrator | 2025-10-09 10:44:05 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:05.989855 | orchestrator | 2025-10-09 10:44:05 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:05.991424 | orchestrator | 2025-10-09 10:44:05 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:05.992657 | orchestrator | 2025-10-09 10:44:05 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:05.993087 | orchestrator | 2025-10-09 10:44:05 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:09.036891 | orchestrator | 2025-10-09 10:44:09 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:09.038541 | orchestrator | 2025-10-09 10:44:09 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:09.041786 | orchestrator | 2025-10-09 10:44:09 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:09.045304 | orchestrator | 2025-10-09 10:44:09 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:09.045339 | orchestrator | 2025-10-09 10:44:09 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:12.084837 | orchestrator | 2025-10-09 10:44:12 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:12.086408 | orchestrator | 2025-10-09 10:44:12 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:12.088376 | orchestrator | 2025-10-09 10:44:12 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:12.090396 | orchestrator | 2025-10-09 10:44:12 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:12.090420 | orchestrator | 2025-10-09 10:44:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:15.123569 | orchestrator | 2025-10-09 10:44:15 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:15.125234 | orchestrator | 2025-10-09 10:44:15 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:15.127759 | orchestrator | 2025-10-09 10:44:15 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:15.129991 | orchestrator | 2025-10-09 10:44:15 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:15.130174 | 
orchestrator | 2025-10-09 10:44:15 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:18.167786 | orchestrator | 2025-10-09 10:44:18 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:18.171094 | orchestrator | 2025-10-09 10:44:18 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:18.174697 | orchestrator | 2025-10-09 10:44:18 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:18.176492 | orchestrator | 2025-10-09 10:44:18 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:18.176566 | orchestrator | 2025-10-09 10:44:18 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:21.225326 | orchestrator | 2025-10-09 10:44:21 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:21.226094 | orchestrator | 2025-10-09 10:44:21 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:21.226856 | orchestrator | 2025-10-09 10:44:21 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:21.227974 | orchestrator | 2025-10-09 10:44:21 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:21.228025 | orchestrator | 2025-10-09 10:44:21 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:24.257868 | orchestrator | 2025-10-09 10:44:24 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:24.258796 | orchestrator | 2025-10-09 10:44:24 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:24.259669 | orchestrator | 2025-10-09 10:44:24 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:24.260693 | orchestrator | 2025-10-09 10:44:24 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:24.260736 | orchestrator | 2025-10-09 10:44:24 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:27.298427 | orchestrator | 2025-10-09 10:44:27 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:27.299815 | orchestrator | 2025-10-09 10:44:27 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:27.301410 | orchestrator | 2025-10-09 10:44:27 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:27.303154 | orchestrator | 2025-10-09 10:44:27 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:27.303457 | orchestrator | 2025-10-09 10:44:27 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:30.343041 | orchestrator | 2025-10-09 10:44:30 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:30.343790 | orchestrator | 2025-10-09 10:44:30 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:30.345017 | orchestrator | 2025-10-09 10:44:30 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:30.346639 | orchestrator | 2025-10-09 10:44:30 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:30.346829 | orchestrator | 2025-10-09 10:44:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:33.407682 | orchestrator | 2025-10-09 10:44:33 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:33.408238 | orchestrator | 2025-10-09 
10:44:33 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:33.409951 | orchestrator | 2025-10-09 10:44:33 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:33.412864 | orchestrator | 2025-10-09 10:44:33 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:33.414300 | orchestrator | 2025-10-09 10:44:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:36.449477 | orchestrator | 2025-10-09 10:44:36 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:36.449974 | orchestrator | 2025-10-09 10:44:36 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:36.451697 | orchestrator | 2025-10-09 10:44:36 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:36.453320 | orchestrator | 2025-10-09 10:44:36 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:36.453343 | orchestrator | 2025-10-09 10:44:36 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:39.497351 | orchestrator | 2025-10-09 10:44:39 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:39.499345 | orchestrator | 2025-10-09 10:44:39 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:39.501368 | orchestrator | 2025-10-09 10:44:39 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:39.503943 | orchestrator | 2025-10-09 10:44:39 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:39.503970 | orchestrator | 2025-10-09 10:44:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:42.549038 | orchestrator | 2025-10-09 10:44:42 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:42.551209 | orchestrator | 2025-10-09 10:44:42 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:42.553935 | orchestrator | 2025-10-09 10:44:42 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:42.555987 | orchestrator | 2025-10-09 10:44:42 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:42.556675 | orchestrator | 2025-10-09 10:44:42 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:45.607405 | orchestrator | 2025-10-09 10:44:45 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:45.607721 | orchestrator | 2025-10-09 10:44:45 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:45.610103 | orchestrator | 2025-10-09 10:44:45 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:45.611080 | orchestrator | 2025-10-09 10:44:45 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:45.611250 | orchestrator | 2025-10-09 10:44:45 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:48.644970 | orchestrator | 2025-10-09 10:44:48 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:48.645816 | orchestrator | 2025-10-09 10:44:48 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:48.646991 | orchestrator | 2025-10-09 10:44:48 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:48.647974 | orchestrator | 2025-10-09 
10:44:48 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:48.648236 | orchestrator | 2025-10-09 10:44:48 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:51.682213 | orchestrator | 2025-10-09 10:44:51 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:51.682689 | orchestrator | 2025-10-09 10:44:51 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:51.683699 | orchestrator | 2025-10-09 10:44:51 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:51.684812 | orchestrator | 2025-10-09 10:44:51 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:51.684844 | orchestrator | 2025-10-09 10:44:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:54.713734 | orchestrator | 2025-10-09 10:44:54 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:54.714232 | orchestrator | 2025-10-09 10:44:54 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:54.715254 | orchestrator | 2025-10-09 10:44:54 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:54.716094 | orchestrator | 2025-10-09 10:44:54 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:54.716112 | orchestrator | 2025-10-09 10:44:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:57.766196 | orchestrator | 2025-10-09 10:44:57 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:44:57.766673 | orchestrator | 2025-10-09 10:44:57 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:44:57.767716 | orchestrator | 2025-10-09 10:44:57 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:44:57.768993 | orchestrator | 2025-10-09 10:44:57 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:44:57.769035 | orchestrator | 2025-10-09 10:44:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:00.814359 | orchestrator | 2025-10-09 10:45:00 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:00.814580 | orchestrator | 2025-10-09 10:45:00 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:00.816812 | orchestrator | 2025-10-09 10:45:00 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:00.818440 | orchestrator | 2025-10-09 10:45:00 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:00.818721 | orchestrator | 2025-10-09 10:45:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:03.867577 | orchestrator | 2025-10-09 10:45:03 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:03.868055 | orchestrator | 2025-10-09 10:45:03 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:03.869256 | orchestrator | 2025-10-09 10:45:03 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:03.870339 | orchestrator | 2025-10-09 10:45:03 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:03.870590 | orchestrator | 2025-10-09 10:45:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:06.932738 | orchestrator | 2025-10-09 10:45:06 | INFO  | Task 
80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:06.935051 | orchestrator | 2025-10-09 10:45:06 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:06.938510 | orchestrator | 2025-10-09 10:45:06 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:06.941498 | orchestrator | 2025-10-09 10:45:06 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:06.941565 | orchestrator | 2025-10-09 10:45:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:09.991921 | orchestrator | 2025-10-09 10:45:09 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:09.994399 | orchestrator | 2025-10-09 10:45:09 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:09.996313 | orchestrator | 2025-10-09 10:45:09 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:09.998696 | orchestrator | 2025-10-09 10:45:09 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:10.000258 | orchestrator | 2025-10-09 10:45:09 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:13.045737 | orchestrator | 2025-10-09 10:45:13 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:13.047060 | orchestrator | 2025-10-09 10:45:13 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:13.048269 | orchestrator | 2025-10-09 10:45:13 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:13.049910 | orchestrator | 2025-10-09 10:45:13 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:13.049956 | orchestrator | 2025-10-09 10:45:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:16.085772 | orchestrator | 2025-10-09 10:45:16 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:16.087490 | orchestrator | 2025-10-09 10:45:16 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:16.090392 | orchestrator | 2025-10-09 10:45:16 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:16.092424 | orchestrator | 2025-10-09 10:45:16 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:16.092505 | orchestrator | 2025-10-09 10:45:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:19.129796 | orchestrator | 2025-10-09 10:45:19 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:19.130739 | orchestrator | 2025-10-09 10:45:19 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:19.132275 | orchestrator | 2025-10-09 10:45:19 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:19.134116 | orchestrator | 2025-10-09 10:45:19 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:19.134227 | orchestrator | 2025-10-09 10:45:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:22.194306 | orchestrator | 2025-10-09 10:45:22 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:22.195620 | orchestrator | 2025-10-09 10:45:22 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:22.197254 | orchestrator | 2025-10-09 10:45:22 | INFO  | Task 
3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:22.199209 | orchestrator | 2025-10-09 10:45:22 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:22.199252 | orchestrator | 2025-10-09 10:45:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:25.242828 | orchestrator | 2025-10-09 10:45:25 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:25.244278 | orchestrator | 2025-10-09 10:45:25 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:25.245266 | orchestrator | 2025-10-09 10:45:25 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:25.246406 | orchestrator | 2025-10-09 10:45:25 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:25.246508 | orchestrator | 2025-10-09 10:45:25 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:28.296621 | orchestrator | 2025-10-09 10:45:28 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:28.298499 | orchestrator | 2025-10-09 10:45:28 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:28.300201 | orchestrator | 2025-10-09 10:45:28 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:28.303773 | orchestrator | 2025-10-09 10:45:28 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:28.303919 | orchestrator | 2025-10-09 10:45:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:31.355854 | orchestrator | 2025-10-09 10:45:31 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state STARTED 2025-10-09 10:45:31.359111 | orchestrator | 2025-10-09 10:45:31 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:31.360849 | orchestrator | 2025-10-09 10:45:31 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:31.362759 | orchestrator | 2025-10-09 10:45:31 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:31.362931 | orchestrator | 2025-10-09 10:45:31 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:34.413844 | orchestrator | 2025-10-09 10:45:34 | INFO  | Task 80e24112-ecb3-4595-9b7c-6526a678fe00 is in state SUCCESS 2025-10-09 10:45:34.414856 | orchestrator | 2025-10-09 10:45:34.414923 | orchestrator | 2025-10-09 10:45:34.414938 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:45:34.415007 | orchestrator | 2025-10-09 10:45:34.415056 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:45:34.415069 | orchestrator | Thursday 09 October 2025 10:43:43 +0000 (0:00:00.171) 0:00:00.171 ****** 2025-10-09 10:45:34.415080 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:34.415092 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:45:34.415103 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:45:34.415113 | orchestrator | 2025-10-09 10:45:34.415157 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:45:34.415168 | orchestrator | Thursday 09 October 2025 10:43:43 +0000 (0:00:00.269) 0:00:00.440 ****** 2025-10-09 10:45:34.415180 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-10-09 10:45:34.415191 | orchestrator | ok: [testbed-node-1] => 
(item=enable_nova_True) 2025-10-09 10:45:34.415202 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-10-09 10:45:34.415213 | orchestrator | 2025-10-09 10:45:34.415223 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-10-09 10:45:34.415234 | orchestrator | 2025-10-09 10:45:34.415245 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-10-09 10:45:34.415256 | orchestrator | Thursday 09 October 2025 10:43:44 +0000 (0:00:00.562) 0:00:01.003 ****** 2025-10-09 10:45:34.415267 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:34.415277 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:45:34.415288 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:45:34.415299 | orchestrator | 2025-10-09 10:45:34.415310 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:45:34.415321 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:45:34.415334 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:45:34.415345 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:45:34.415356 | orchestrator | 2025-10-09 10:45:34.415367 | orchestrator | 2025-10-09 10:45:34.415418 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:45:34.415431 | orchestrator | Thursday 09 October 2025 10:43:44 +0000 (0:00:00.813) 0:00:01.816 ****** 2025-10-09 10:45:34.415443 | orchestrator | =============================================================================== 2025-10-09 10:45:34.415455 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.81s 2025-10-09 10:45:34.415468 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-10-09 10:45:34.415480 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2025-10-09 10:45:34.415493 | orchestrator | 2025-10-09 10:45:34.415505 | orchestrator | 2025-10-09 10:45:34.415517 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:45:34.415530 | orchestrator | 2025-10-09 10:45:34.415542 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:45:34.415554 | orchestrator | Thursday 09 October 2025 10:43:31 +0000 (0:00:00.325) 0:00:00.325 ****** 2025-10-09 10:45:34.415566 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:34.415578 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:45:34.415617 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:45:34.415630 | orchestrator | 2025-10-09 10:45:34.415643 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:45:34.415655 | orchestrator | Thursday 09 October 2025 10:43:31 +0000 (0:00:00.341) 0:00:00.666 ****** 2025-10-09 10:45:34.415667 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-10-09 10:45:34.415679 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-10-09 10:45:34.415692 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-10-09 10:45:34.415704 | orchestrator | 2025-10-09 10:45:34.415716 | orchestrator | PLAY [Apply role magnum] 
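The `Waiting for Nova public port to be UP` task in the play and recap above simply blocks until the Nova API endpoint accepts TCP connections before the Magnum play that follows is allowed to start. A stdlib-only sketch of the same idea, with an illustrative host and port in the usage comment:

```python
# Stdlib-only equivalent of "Waiting for Nova public port to be UP":
# keep retrying a TCP connect until it succeeds or a deadline passes.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 300.0, interval: float = 5.0) -> None:
    """Block until host:port accepts a TCP connection or raise on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return  # port is reachable
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"{host}:{port} did not come up within {timeout:g}s")
            time.sleep(interval)

# Illustrative call (8774 is the conventional nova-api port, not read from this log):
# wait_for_port("api-int.testbed.osism.xyz", 8774)
```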
******************************************************* 2025-10-09 10:45:34.415728 | orchestrator | 2025-10-09 10:45:34.415740 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-10-09 10:45:34.415752 | orchestrator | Thursday 09 October 2025 10:43:32 +0000 (0:00:00.460) 0:00:01.127 ****** 2025-10-09 10:45:34.415764 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:34.415777 | orchestrator | 2025-10-09 10:45:34.415790 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-10-09 10:45:34.415801 | orchestrator | Thursday 09 October 2025 10:43:32 +0000 (0:00:00.609) 0:00:01.737 ****** 2025-10-09 10:45:34.415812 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-10-09 10:45:34.415823 | orchestrator | 2025-10-09 10:45:34.415833 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-10-09 10:45:34.415844 | orchestrator | Thursday 09 October 2025 10:43:36 +0000 (0:00:03.808) 0:00:05.545 ****** 2025-10-09 10:45:34.415855 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-10-09 10:45:34.415866 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-10-09 10:45:34.415877 | orchestrator | 2025-10-09 10:45:34.415887 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-10-09 10:45:34.415899 | orchestrator | Thursday 09 October 2025 10:43:43 +0000 (0:00:07.180) 0:00:12.725 ****** 2025-10-09 10:45:34.415910 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:45:34.415921 | orchestrator | 2025-10-09 10:45:34.415931 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-10-09 10:45:34.415942 | orchestrator | Thursday 09 October 2025 10:43:47 +0000 (0:00:03.649) 0:00:16.374 ****** 2025-10-09 10:45:34.415967 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:45:34.415978 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-10-09 10:45:34.415989 | orchestrator | 2025-10-09 10:45:34.416000 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-10-09 10:45:34.416016 | orchestrator | Thursday 09 October 2025 10:43:51 +0000 (0:00:04.339) 0:00:20.714 ****** 2025-10-09 10:45:34.416028 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:45:34.416038 | orchestrator | 2025-10-09 10:45:34.416049 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-10-09 10:45:34.416060 | orchestrator | Thursday 09 October 2025 10:43:55 +0000 (0:00:03.401) 0:00:24.115 ****** 2025-10-09 10:45:34.416070 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-10-09 10:45:34.416081 | orchestrator | 2025-10-09 10:45:34.416091 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-10-09 10:45:34.416102 | orchestrator | Thursday 09 October 2025 10:43:58 +0000 (0:00:03.753) 0:00:27.868 ****** 2025-10-09 10:45:34.416113 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:34.416142 | orchestrator | 2025-10-09 10:45:34.416153 | orchestrator | TASK [magnum : Creating Magnum trustee user] 
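The `service-ks-register` tasks above register Magnum in Keystone: a `magnum` service of type `container-infra`, internal and public endpoints on port 9511, the service project, a `magnum` service user, and an admin role grant, followed by the Magnum trustee domain and user. A rough openstacksdk sketch of the first two steps follows; the method names are recalled from the SDK's identity proxy and should be read as assumptions, not as the module code kolla-ansible actually runs.

```python
# Rough openstacksdk sketch of the registration that service-ks-register performs
# above. Method names are recalled from the SDK's identity proxy and should be
# treated as assumptions; kolla-ansible itself uses its own Ansible modules.
import openstack

conn = openstack.connect(cloud="testbed")  # cloud entry name is a placeholder

# Service entry: magnum, type container-infra (as in "Creating services" above).
service = conn.identity.create_service(name="magnum", type="container-infra")

# Internal and public endpoints on port 9511 (as in "Creating endpoints" above).
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9511/v1"),
    ("public", "https://api.testbed.osism.xyz:9511/v1"),
]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# The play then creates the magnum service user, grants it the admin role in the
# "service" project and sets up the Magnum trustee domain and user (not sketched here).
```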
*********************************** 2025-10-09 10:45:34.416164 | orchestrator | Thursday 09 October 2025 10:44:02 +0000 (0:00:03.267) 0:00:31.136 ****** 2025-10-09 10:45:34.416174 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:34.416185 | orchestrator | 2025-10-09 10:45:34.416210 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-10-09 10:45:34.416221 | orchestrator | Thursday 09 October 2025 10:44:06 +0000 (0:00:04.283) 0:00:35.419 ****** 2025-10-09 10:45:34.416232 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:34.416242 | orchestrator | 2025-10-09 10:45:34.416253 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-10-09 10:45:34.416264 | orchestrator | Thursday 09 October 2025 10:44:10 +0000 (0:00:03.855) 0:00:39.275 ****** 2025-10-09 10:45:34.416278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416391 | orchestrator | 2025-10-09 10:45:34.416409 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-10-09 10:45:34.416427 | orchestrator | Thursday 09 October 2025 10:44:11 +0000 (0:00:01.460) 0:00:40.736 ****** 2025-10-09 10:45:34.416446 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:34.416464 | orchestrator | 2025-10-09 10:45:34.416482 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-10-09 10:45:34.416499 | orchestrator | Thursday 09 October 2025 10:44:11 +0000 (0:00:00.135) 0:00:40.872 ****** 2025-10-09 10:45:34.416517 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:34.416535 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:34.416554 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:34.416567 | orchestrator | 2025-10-09 10:45:34.416578 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-10-09 
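Each `(item={'key': 'magnum-api', ...})` loop item above is a kolla-ansible service definition: container name, image, bind mounts, a healthcheck command, and the HAProxy frontends that expose the API internally and externally. The sketch below restates the magnum-api definition from the log (testbed-node-0) as plain Python data and adds an illustrative helper, `build_healthcheck`, which is an assumption for this sketch rather than kolla code, mapping the healthcheck block to docker-style options on the assumption that the bare numbers are seconds.

```python
# The magnum-api definition from the loop items above (testbed-node-0), restated
# as plain Python data; empty placeholder volume entries from the log are omitted.
magnum_api = {
    "container_name": "magnum_api",
    "group": "magnum-api",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928",
    "volumes": [
        "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        "timeout": "30",
    },
    "haproxy": {
        "magnum_api": {"enabled": "yes", "mode": "http", "external": False,
                       "port": "9511", "listen_port": "9511"},
        "magnum_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                "external_fqdn": "api.testbed.osism.xyz",
                                "port": "9511", "listen_port": "9511"},
    },
}

def build_healthcheck(definition: dict) -> dict:
    """Illustrative helper (an assumption, not kolla code): map the healthcheck
    block to docker-style options, assuming the bare numbers are seconds."""
    hc = definition["healthcheck"]
    return {
        "test": hc["test"],
        "interval": f"{hc['interval']}s",
        "retries": int(hc["retries"]),
        "start_period": f"{hc['start_period']}s",
        "timeout": f"{hc['timeout']}s",
    }
```

The per-node differences visible above are limited to the healthcheck IP (192.168.16.10/11/12) and, for magnum-conductor, the `no_proxy` entry; the rest of the definition is identical on all three nodes.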
10:45:34.416588 | orchestrator | Thursday 09 October 2025 10:44:12 +0000 (0:00:00.470) 0:00:41.343 ****** 2025-10-09 10:45:34.416599 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:45:34.416610 | orchestrator | 2025-10-09 10:45:34.416620 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-10-09 10:45:34.416631 | orchestrator | Thursday 09 October 2025 10:44:13 +0000 (0:00:00.808) 0:00:42.151 ****** 2025-10-09 10:45:34.416643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416736 | orchestrator | 2025-10-09 10:45:34.416748 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-10-09 10:45:34.416770 | orchestrator | Thursday 09 October 2025 10:44:15 +0000 (0:00:02.506) 0:00:44.658 ****** 2025-10-09 10:45:34.416781 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:34.416792 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:45:34.416803 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:45:34.416813 | orchestrator | 2025-10-09 10:45:34.416824 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-10-09 10:45:34.416840 | orchestrator | Thursday 09 October 2025 10:44:16 +0000 (0:00:00.365) 0:00:45.023 ****** 2025-10-09 10:45:34.416851 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:34.416862 | orchestrator | 2025-10-09 10:45:34.416873 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-10-09 10:45:34.416884 | orchestrator | Thursday 09 October 2025 10:44:16 +0000 (0:00:00.763) 0:00:45.786 ****** 2025-10-09 10:45:34.416895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.416930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.416982 | orchestrator | 2025-10-09 10:45:34.416993 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-10-09 10:45:34.417004 | orchestrator | Thursday 09 October 2025 10:44:19 +0000 (0:00:02.716) 0:00:48.503 ****** 2025-10-09 10:45:34.417016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.417028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:34.417039 | orchestrator | skipping: 
[testbed-node-0] 2025-10-09 10:45:34.417063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.417079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:34.417091 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:34.417103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.417114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2025-10-09 10:45:34.417153 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:34.417172 | orchestrator | 2025-10-09 10:45:34.417188 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-10-09 10:45:34.417199 | orchestrator | Thursday 09 October 2025 10:44:21 +0000 (0:00:01.652) 0:00:50.155 ****** 2025-10-09 10:45:34.417210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.417250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:34.417263 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:34.417274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.417285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:34.417296 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:34.417308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.417325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:34.417337 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:34.417347 | orchestrator | 2025-10-09 10:45:34.417358 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-10-09 10:45:34.417369 | orchestrator | Thursday 09 October 2025 10:44:24 +0000 (0:00:02.893) 0:00:53.049 ****** 2025-10-09 10:45:34.417739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.417767 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.417779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.417800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.417820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.417836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.417848 | orchestrator | 2025-10-09 10:45:34.417859 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-10-09 10:45:34.417870 | orchestrator | Thursday 09 October 2025 10:44:27 +0000 (0:00:03.329) 0:00:56.378 ****** 2025-10-09 10:45:34.417881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.417893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.417911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.417933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.417945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.417956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.417968 | orchestrator | 2025-10-09 10:45:34.417979 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-10-09 10:45:34.417990 | orchestrator | Thursday 09 October 2025 10:44:33 +0000 (0:00:05.672) 0:01:02.051 ****** 2025-10-09 10:45:34.418001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.418202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:34.418224 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:34.418252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.418265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:34.418276 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:34.418288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:45:34.418307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:34.418317 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:34.418327 | orchestrator | 2025-10-09 10:45:34.418338 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-10-09 10:45:34.418349 | orchestrator | Thursday 09 October 2025 10:44:33 +0000 (0:00:00.641) 0:01:02.692 ****** 2025-10-09 10:45:34.418365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.418382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.418394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:45:34.418411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.418423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.418438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:34.418450 | orchestrator | 2025-10-09 10:45:34.418460 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-10-09 10:45:34.418471 | orchestrator | Thursday 09 October 2025 10:44:36 +0000 (0:00:02.491) 0:01:05.183 ****** 2025-10-09 10:45:34.418486 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:34.418498 | orchestrator | 
skipping: [testbed-node-1] 2025-10-09 10:45:34.418509 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:34.418519 | orchestrator | 2025-10-09 10:45:34.418530 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-10-09 10:45:34.418541 | orchestrator | Thursday 09 October 2025 10:44:36 +0000 (0:00:00.316) 0:01:05.500 ****** 2025-10-09 10:45:34.418551 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:34.418562 | orchestrator | 2025-10-09 10:45:34.418573 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-10-09 10:45:34.418584 | orchestrator | Thursday 09 October 2025 10:44:39 +0000 (0:00:02.507) 0:01:08.007 ****** 2025-10-09 10:45:34.418595 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:34.418605 | orchestrator | 2025-10-09 10:45:34.418616 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-10-09 10:45:34.418627 | orchestrator | Thursday 09 October 2025 10:44:41 +0000 (0:00:02.604) 0:01:10.612 ****** 2025-10-09 10:45:34.418637 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:34.418649 | orchestrator | 2025-10-09 10:45:34.418659 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-10-09 10:45:34.418670 | orchestrator | Thursday 09 October 2025 10:45:02 +0000 (0:00:20.947) 0:01:31.560 ****** 2025-10-09 10:45:34.418696 | orchestrator | 2025-10-09 10:45:34.418712 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-10-09 10:45:34.418727 | orchestrator | Thursday 09 October 2025 10:45:02 +0000 (0:00:00.077) 0:01:31.638 ****** 2025-10-09 10:45:34.418743 | orchestrator | 2025-10-09 10:45:34.418760 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-10-09 10:45:34.418775 | orchestrator | Thursday 09 October 2025 10:45:02 +0000 (0:00:00.086) 0:01:31.724 ****** 2025-10-09 10:45:34.418792 | orchestrator | 2025-10-09 10:45:34.418808 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-10-09 10:45:34.418824 | orchestrator | Thursday 09 October 2025 10:45:02 +0000 (0:00:00.108) 0:01:31.832 ****** 2025-10-09 10:45:34.418834 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:34.418844 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:34.418853 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:34.418863 | orchestrator | 2025-10-09 10:45:34.418872 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-10-09 10:45:34.418882 | orchestrator | Thursday 09 October 2025 10:45:16 +0000 (0:00:14.006) 0:01:45.839 ****** 2025-10-09 10:45:34.418891 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:34.418901 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:34.418910 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:34.418920 | orchestrator | 2025-10-09 10:45:34.418929 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:45:34.418939 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:45:34.418951 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:45:34.418961 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 
skipped=5  rescued=0 ignored=0 2025-10-09 10:45:34.418970 | orchestrator | 2025-10-09 10:45:34.418998 | orchestrator | 2025-10-09 10:45:34.419008 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:45:34.419018 | orchestrator | Thursday 09 October 2025 10:45:33 +0000 (0:00:16.795) 0:02:02.634 ****** 2025-10-09 10:45:34.419027 | orchestrator | =============================================================================== 2025-10-09 10:45:34.419037 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 20.95s 2025-10-09 10:45:34.419046 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.80s 2025-10-09 10:45:34.419055 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.01s 2025-10-09 10:45:34.419065 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.18s 2025-10-09 10:45:34.419074 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.67s 2025-10-09 10:45:34.419083 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.34s 2025-10-09 10:45:34.419093 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.28s 2025-10-09 10:45:34.419102 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.86s 2025-10-09 10:45:34.419111 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.81s 2025-10-09 10:45:34.419145 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.75s 2025-10-09 10:45:34.419156 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.65s 2025-10-09 10:45:34.419166 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.40s 2025-10-09 10:45:34.419175 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.33s 2025-10-09 10:45:34.419185 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.27s 2025-10-09 10:45:34.419206 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.89s 2025-10-09 10:45:34.419216 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.72s 2025-10-09 10:45:34.419232 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.60s 2025-10-09 10:45:34.419242 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.51s 2025-10-09 10:45:34.419251 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.51s 2025-10-09 10:45:34.419266 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.49s 2025-10-09 10:45:34.419360 | orchestrator | 2025-10-09 10:45:34 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:34.419374 | orchestrator | 2025-10-09 10:45:34 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:34.421493 | orchestrator | 2025-10-09 10:45:34 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:34.421524 | orchestrator | 2025-10-09 10:45:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:37.466829 | orchestrator | 2025-10-09 10:45:37 | 
INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:37.468823 | orchestrator | 2025-10-09 10:45:37 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:37.470465 | orchestrator | 2025-10-09 10:45:37 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:37.470493 | orchestrator | 2025-10-09 10:45:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:40.514951 | orchestrator | 2025-10-09 10:45:40 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state STARTED 2025-10-09 10:45:40.517785 | orchestrator | 2025-10-09 10:45:40 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:40.519687 | orchestrator | 2025-10-09 10:45:40 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:40.519718 | orchestrator | 2025-10-09 10:45:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:43.563631 | orchestrator | 2025-10-09 10:45:43 | INFO  | Task 6c9274b8-38ed-4de6-9259-a62b99a74d70 is in state SUCCESS 2025-10-09 10:45:43.565610 | orchestrator | 2025-10-09 10:45:43.565644 | orchestrator | 2025-10-09 10:45:43.565655 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:45:43.565664 | orchestrator | 2025-10-09 10:45:43.565674 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-10-09 10:45:43.565683 | orchestrator | Thursday 09 October 2025 10:35:49 +0000 (0:00:00.232) 0:00:00.232 ****** 2025-10-09 10:45:43.565692 | orchestrator | changed: [testbed-manager] 2025-10-09 10:45:43.565702 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.565711 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:43.565720 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:43.565728 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.565737 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.565745 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.565754 | orchestrator | 2025-10-09 10:45:43.565763 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:45:43.565772 | orchestrator | Thursday 09 October 2025 10:35:50 +0000 (0:00:00.906) 0:00:01.138 ****** 2025-10-09 10:45:43.565780 | orchestrator | changed: [testbed-manager] 2025-10-09 10:45:43.565789 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.565797 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:43.565806 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:43.565815 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.565823 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.565832 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.565863 | orchestrator | 2025-10-09 10:45:43.565872 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:45:43.565881 | orchestrator | Thursday 09 October 2025 10:35:51 +0000 (0:00:00.586) 0:00:01.724 ****** 2025-10-09 10:45:43.565889 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-10-09 10:45:43.565898 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-10-09 10:45:43.565907 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-10-09 10:45:43.565916 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 
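A minimal Python sketch of the kolla-ansible service mapping that the magnum loop items above keep printing; the keys and values are copied from the testbed-node-0 entries in the log, while the structure of the snippet and the small helper that lists each healthcheck command are illustrative only, not part of the job output or of kolla-ansible itself.

# Illustrative only: the same per-service structure that appears in the
# magnum loop items logged above (values copied from testbed-node-0).
magnum_services = {
    "magnum-api": {
        "container_name": "magnum_api",
        "group": "magnum-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928",
        "volumes": [
            "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
            "timeout": "30",
        },
    },
    "magnum-conductor": {
        "container_name": "magnum_conductor",
        "group": "magnum-conductor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928",
        "healthcheck": {
            "test": ["CMD-SHELL", "healthcheck_port magnum-conductor 5672"],
        },
    },
}

# Hypothetical helper: show the healthcheck command each container runs,
# as reported in the loop items above.
for name, svc in magnum_services.items():
    print(name, "->", " ".join(svc["healthcheck"]["test"][1:]))

Running this prints the healthcheck_curl call for magnum-api and the healthcheck_port call for magnum-conductor, which is what the skipped/changed loop items above repeat for every node.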
2025-10-09 10:45:43.565925 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-10-09 10:45:43.565933 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-10-09 10:45:43.565941 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-10-09 10:45:43.565950 | orchestrator | 2025-10-09 10:45:43.565959 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-10-09 10:45:43.565967 | orchestrator | 2025-10-09 10:45:43.565976 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-10-09 10:45:43.565984 | orchestrator | Thursday 09 October 2025 10:35:52 +0000 (0:00:00.844) 0:00:02.569 ****** 2025-10-09 10:45:43.565993 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:43.566001 | orchestrator | 2025-10-09 10:45:43.566010 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-10-09 10:45:43.566089 | orchestrator | Thursday 09 October 2025 10:35:53 +0000 (0:00:00.772) 0:00:03.341 ****** 2025-10-09 10:45:43.566101 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-10-09 10:45:43.566110 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-10-09 10:45:43.566143 | orchestrator | 2025-10-09 10:45:43.566153 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-10-09 10:45:43.566162 | orchestrator | Thursday 09 October 2025 10:35:57 +0000 (0:00:04.402) 0:00:07.744 ****** 2025-10-09 10:45:43.566170 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:45:43.566179 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:45:43.566187 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.566196 | orchestrator | 2025-10-09 10:45:43.566205 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-10-09 10:45:43.566227 | orchestrator | Thursday 09 October 2025 10:36:02 +0000 (0:00:04.649) 0:00:12.394 ****** 2025-10-09 10:45:43.566238 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.566247 | orchestrator | 2025-10-09 10:45:43.566258 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-10-09 10:45:43.566267 | orchestrator | Thursday 09 October 2025 10:36:03 +0000 (0:00:01.085) 0:00:13.479 ****** 2025-10-09 10:45:43.566277 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.566287 | orchestrator | 2025-10-09 10:45:43.566296 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-10-09 10:45:43.566306 | orchestrator | Thursday 09 October 2025 10:36:05 +0000 (0:00:02.456) 0:00:15.936 ****** 2025-10-09 10:45:43.566315 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.566325 | orchestrator | 2025-10-09 10:45:43.566335 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-09 10:45:43.566345 | orchestrator | Thursday 09 October 2025 10:36:10 +0000 (0:00:05.302) 0:00:21.238 ****** 2025-10-09 10:45:43.566354 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.566364 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.566373 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.566382 | orchestrator | 2025-10-09 10:45:43.566392 | orchestrator | TASK [nova : Running Nova API bootstrap container] 
***************************** 2025-10-09 10:45:43.566401 | orchestrator | Thursday 09 October 2025 10:36:11 +0000 (0:00:00.387) 0:00:21.625 ****** 2025-10-09 10:45:43.566510 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:43.566524 | orchestrator | 2025-10-09 10:45:43.566534 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-10-09 10:45:43.566554 | orchestrator | Thursday 09 October 2025 10:36:42 +0000 (0:00:31.373) 0:00:52.999 ****** 2025-10-09 10:45:43.566564 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.566574 | orchestrator | 2025-10-09 10:45:43.566583 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-10-09 10:45:43.566591 | orchestrator | Thursday 09 October 2025 10:37:00 +0000 (0:00:17.694) 0:01:10.694 ****** 2025-10-09 10:45:43.566600 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:43.566608 | orchestrator | 2025-10-09 10:45:43.566617 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-10-09 10:45:43.566625 | orchestrator | Thursday 09 October 2025 10:37:14 +0000 (0:00:14.045) 0:01:24.739 ****** 2025-10-09 10:45:43.566646 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:43.566656 | orchestrator | 2025-10-09 10:45:43.566664 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-10-09 10:45:43.566673 | orchestrator | Thursday 09 October 2025 10:37:15 +0000 (0:00:00.976) 0:01:25.715 ****** 2025-10-09 10:45:43.566681 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.566690 | orchestrator | 2025-10-09 10:45:43.566698 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-09 10:45:43.566707 | orchestrator | Thursday 09 October 2025 10:37:15 +0000 (0:00:00.455) 0:01:26.171 ****** 2025-10-09 10:45:43.566716 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:43.566725 | orchestrator | 2025-10-09 10:45:43.566733 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-10-09 10:45:43.566741 | orchestrator | Thursday 09 October 2025 10:37:16 +0000 (0:00:00.545) 0:01:26.716 ****** 2025-10-09 10:45:43.566750 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:43.566758 | orchestrator | 2025-10-09 10:45:43.566767 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-10-09 10:45:43.566775 | orchestrator | Thursday 09 October 2025 10:37:36 +0000 (0:00:19.638) 0:01:46.354 ****** 2025-10-09 10:45:43.566784 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.566792 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.566801 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.566809 | orchestrator | 2025-10-09 10:45:43.566818 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-10-09 10:45:43.566826 | orchestrator | 2025-10-09 10:45:43.566835 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-10-09 10:45:43.566843 | orchestrator | Thursday 09 October 2025 10:37:36 +0000 (0:00:00.334) 0:01:46.688 ****** 2025-10-09 10:45:43.566852 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:43.566860 | orchestrator | 2025-10-09 
10:45:43.566868 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-10-09 10:45:43.566877 | orchestrator | Thursday 09 October 2025 10:37:36 +0000 (0:00:00.612) 0:01:47.301 ****** 2025-10-09 10:45:43.566885 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.566894 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.566902 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.566911 | orchestrator | 2025-10-09 10:45:43.566920 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-10-09 10:45:43.566928 | orchestrator | Thursday 09 October 2025 10:37:38 +0000 (0:00:01.884) 0:01:49.185 ****** 2025-10-09 10:45:43.566937 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.566946 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.566954 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.566963 | orchestrator | 2025-10-09 10:45:43.566971 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-10-09 10:45:43.566980 | orchestrator | Thursday 09 October 2025 10:37:40 +0000 (0:00:01.932) 0:01:51.117 ****** 2025-10-09 10:45:43.566988 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.566997 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567011 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567020 | orchestrator | 2025-10-09 10:45:43.567028 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-10-09 10:45:43.567037 | orchestrator | Thursday 09 October 2025 10:37:41 +0000 (0:00:00.528) 0:01:51.646 ****** 2025-10-09 10:45:43.567045 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-10-09 10:45:43.567054 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567062 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-10-09 10:45:43.567071 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567079 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-10-09 10:45:43.567093 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-10-09 10:45:43.567102 | orchestrator | 2025-10-09 10:45:43.567111 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-10-09 10:45:43.567136 | orchestrator | Thursday 09 October 2025 10:37:49 +0000 (0:00:08.590) 0:02:00.236 ****** 2025-10-09 10:45:43.567145 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.567154 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567162 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567183 | orchestrator | 2025-10-09 10:45:43.567192 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-10-09 10:45:43.567200 | orchestrator | Thursday 09 October 2025 10:37:50 +0000 (0:00:00.385) 0:02:00.622 ****** 2025-10-09 10:45:43.567209 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-10-09 10:45:43.567217 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.567226 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-10-09 10:45:43.567234 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567243 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-10-09 10:45:43.567251 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567260 | orchestrator | 2025-10-09 
10:45:43.567268 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-10-09 10:45:43.567277 | orchestrator | Thursday 09 October 2025 10:37:51 +0000 (0:00:00.740) 0:02:01.362 ****** 2025-10-09 10:45:43.567285 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567294 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567302 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.567311 | orchestrator | 2025-10-09 10:45:43.567319 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-10-09 10:45:43.567328 | orchestrator | Thursday 09 October 2025 10:37:52 +0000 (0:00:01.153) 0:02:02.516 ****** 2025-10-09 10:45:43.567336 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567344 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567353 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.567361 | orchestrator | 2025-10-09 10:45:43.567370 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-10-09 10:45:43.567378 | orchestrator | Thursday 09 October 2025 10:37:53 +0000 (0:00:01.140) 0:02:03.656 ****** 2025-10-09 10:45:43.567387 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567395 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567410 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.567419 | orchestrator | 2025-10-09 10:45:43.567428 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-10-09 10:45:43.567436 | orchestrator | Thursday 09 October 2025 10:37:55 +0000 (0:00:02.437) 0:02:06.094 ****** 2025-10-09 10:45:43.567444 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567453 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567462 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:43.567470 | orchestrator | 2025-10-09 10:45:43.567479 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-10-09 10:45:43.567487 | orchestrator | Thursday 09 October 2025 10:38:17 +0000 (0:00:21.335) 0:02:27.429 ****** 2025-10-09 10:45:43.567524 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567540 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567548 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:43.567557 | orchestrator | 2025-10-09 10:45:43.567565 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-10-09 10:45:43.567614 | orchestrator | Thursday 09 October 2025 10:38:32 +0000 (0:00:15.081) 0:02:42.510 ****** 2025-10-09 10:45:43.567625 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:43.567633 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567671 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567681 | orchestrator | 2025-10-09 10:45:43.567689 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-10-09 10:45:43.567698 | orchestrator | Thursday 09 October 2025 10:38:33 +0000 (0:00:00.977) 0:02:43.488 ****** 2025-10-09 10:45:43.567706 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567715 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567724 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.567732 | orchestrator | 2025-10-09 10:45:43.567741 | orchestrator | TASK [nova-cell : Update cell] 
************************************************* 2025-10-09 10:45:43.567749 | orchestrator | Thursday 09 October 2025 10:38:45 +0000 (0:00:12.674) 0:02:56.162 ****** 2025-10-09 10:45:43.567758 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.567767 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567775 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567784 | orchestrator | 2025-10-09 10:45:43.567792 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-10-09 10:45:43.567801 | orchestrator | Thursday 09 October 2025 10:38:46 +0000 (0:00:00.956) 0:02:57.119 ****** 2025-10-09 10:45:43.567809 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.567818 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.567826 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.567835 | orchestrator | 2025-10-09 10:45:43.567844 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-10-09 10:45:43.567852 | orchestrator | 2025-10-09 10:45:43.567861 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-09 10:45:43.567869 | orchestrator | Thursday 09 October 2025 10:38:47 +0000 (0:00:00.435) 0:02:57.554 ****** 2025-10-09 10:45:43.567878 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:43.567887 | orchestrator | 2025-10-09 10:45:43.567896 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-10-09 10:45:43.567904 | orchestrator | Thursday 09 October 2025 10:38:47 +0000 (0:00:00.591) 0:02:58.146 ****** 2025-10-09 10:45:43.567913 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-10-09 10:45:43.567921 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-10-09 10:45:43.567930 | orchestrator | 2025-10-09 10:45:43.567939 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-10-09 10:45:43.567947 | orchestrator | Thursday 09 October 2025 10:38:51 +0000 (0:00:03.210) 0:03:01.356 ****** 2025-10-09 10:45:43.567960 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-10-09 10:45:43.567970 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-10-09 10:45:43.567979 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-10-09 10:45:43.567987 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-10-09 10:45:43.567996 | orchestrator | 2025-10-09 10:45:43.568004 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-10-09 10:45:43.568013 | orchestrator | Thursday 09 October 2025 10:38:57 +0000 (0:00:06.746) 0:03:08.103 ****** 2025-10-09 10:45:43.568022 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:45:43.568030 | orchestrator | 2025-10-09 10:45:43.568044 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-10-09 10:45:43.568053 | orchestrator | Thursday 09 October 2025 10:39:01 +0000 (0:00:03.447) 0:03:11.551 ****** 2025-10-09 10:45:43.568061 | 
orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:45:43.568070 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-10-09 10:45:43.568078 | orchestrator | 2025-10-09 10:45:43.568087 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-10-09 10:45:43.568095 | orchestrator | Thursday 09 October 2025 10:39:05 +0000 (0:00:04.032) 0:03:15.583 ****** 2025-10-09 10:45:43.568104 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:45:43.568112 | orchestrator | 2025-10-09 10:45:43.568135 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-10-09 10:45:43.568144 | orchestrator | Thursday 09 October 2025 10:39:08 +0000 (0:00:03.529) 0:03:19.113 ****** 2025-10-09 10:45:43.568152 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-10-09 10:45:43.568160 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-10-09 10:45:43.568169 | orchestrator | 2025-10-09 10:45:43.568178 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-10-09 10:45:43.568192 | orchestrator | Thursday 09 October 2025 10:39:17 +0000 (0:00:08.303) 0:03:27.416 ****** 2025-10-09 10:45:43.568207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.568226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.568238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.568264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.568276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.568285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.568295 | orchestrator | 
2025-10-09 10:45:43.568304 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-10-09 10:45:43.568312 | orchestrator | Thursday 09 October 2025 10:39:18 +0000 (0:00:01.509) 0:03:28.926 ****** 2025-10-09 10:45:43.568334 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.568343 | orchestrator | 2025-10-09 10:45:43.568352 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-10-09 10:45:43.568360 | orchestrator | Thursday 09 October 2025 10:39:18 +0000 (0:00:00.144) 0:03:29.071 ****** 2025-10-09 10:45:43.568369 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.568378 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.568386 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.568435 | orchestrator | 2025-10-09 10:45:43.568444 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-10-09 10:45:43.568452 | orchestrator | Thursday 09 October 2025 10:39:19 +0000 (0:00:00.326) 0:03:29.397 ****** 2025-10-09 10:45:43.568461 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:45:43.568475 | orchestrator | 2025-10-09 10:45:43.568484 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-10-09 10:45:43.568493 | orchestrator | Thursday 09 October 2025 10:39:20 +0000 (0:00:01.706) 0:03:31.104 ****** 2025-10-09 10:45:43.568501 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.568510 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.568518 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.568527 | orchestrator | 2025-10-09 10:45:43.568535 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-09 10:45:43.568548 | orchestrator | Thursday 09 October 2025 10:39:21 +0000 (0:00:00.324) 0:03:31.429 ****** 2025-10-09 10:45:43.568556 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:43.568565 | orchestrator | 2025-10-09 10:45:43.568574 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-10-09 10:45:43.568582 | orchestrator | Thursday 09 October 2025 10:39:22 +0000 (0:00:01.022) 0:03:32.452 ****** 2025-10-09 10:45:43.568721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.568733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.568748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.568764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.568773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.568790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.568799 | orchestrator | 2025-10-09 10:45:43.568808 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-10-09 10:45:43.568817 | orchestrator | Thursday 09 October 2025 10:39:25 +0000 (0:00:03.121) 0:03:35.573 ****** 2025-10-09 10:45:43.568826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.568846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.568855 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.568868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.568883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.568892 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.568902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.568916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.568925 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.568934 | orchestrator | 2025-10-09 10:45:43.568943 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-10-09 10:45:43.568951 | orchestrator | Thursday 09 October 2025 10:39:26 +0000 (0:00:00.908) 0:03:36.482 ****** 2025-10-09 10:45:43.568964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.568974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.568983 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.570239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.570347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.570365 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.570392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.570406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.570417 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.570429 | orchestrator | 2025-10-09 10:45:43.570440 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-10-09 10:45:43.570452 | orchestrator | Thursday 09 October 2025 10:39:26 +0000 (0:00:00.823) 0:03:37.306 ****** 2025-10-09 10:45:43.570493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.570516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.570535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.570558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.570571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.570583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.570602 | orchestrator | 2025-10-09 10:45:43.570613 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-10-09 10:45:43.570624 | orchestrator | Thursday 09 October 2025 10:39:29 +0000 (0:00:02.955) 0:03:40.261 ****** 2025-10-09 10:45:43.570640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.570654 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.570674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.570694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.570706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.570722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.570733 | orchestrator | 2025-10-09 10:45:43.570745 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-10-09 10:45:43.570756 | orchestrator | Thursday 09 October 2025 10:39:39 +0000 (0:00:09.652) 0:03:49.913 ****** 2025-10-09 10:45:43.570774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.570787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.570804 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.570816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.570833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.570845 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.570857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:45:43.570877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.570894 | orchestrator | skipping: [testbed-node-2] 2025-10-09 
10:45:43.570906 | orchestrator | 2025-10-09 10:45:43.570917 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-10-09 10:45:43.570928 | orchestrator | Thursday 09 October 2025 10:39:41 +0000 (0:00:01.575) 0:03:51.489 ****** 2025-10-09 10:45:43.570939 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.570950 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:43.570960 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:43.570971 | orchestrator | 2025-10-09 10:45:43.570982 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-10-09 10:45:43.570993 | orchestrator | Thursday 09 October 2025 10:39:42 +0000 (0:00:01.635) 0:03:53.125 ****** 2025-10-09 10:45:43.571004 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.571015 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.571025 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.571036 | orchestrator | 2025-10-09 10:45:43.571047 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-10-09 10:45:43.571058 | orchestrator | Thursday 09 October 2025 10:39:43 +0000 (0:00:00.633) 0:03:53.758 ****** 2025-10-09 10:45:43.571069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.571086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.571113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.1.1.20250928', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:43.571156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.571168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.571184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.571196 | orchestrator | 2025-10-09 10:45:43.571207 | orchestrator | TASK [nova : 
Flush handlers] *************************************************** 2025-10-09 10:45:43.571218 | orchestrator | Thursday 09 October 2025 10:39:46 +0000 (0:00:02.901) 0:03:56.660 ****** 2025-10-09 10:45:43.571229 | orchestrator | 2025-10-09 10:45:43.571239 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-10-09 10:45:43.571250 | orchestrator | Thursday 09 October 2025 10:39:46 +0000 (0:00:00.380) 0:03:57.041 ****** 2025-10-09 10:45:43.571261 | orchestrator | 2025-10-09 10:45:43.571272 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-10-09 10:45:43.571283 | orchestrator | Thursday 09 October 2025 10:39:46 +0000 (0:00:00.275) 0:03:57.316 ****** 2025-10-09 10:45:43.571294 | orchestrator | 2025-10-09 10:45:43.571304 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-10-09 10:45:43.571315 | orchestrator | Thursday 09 October 2025 10:39:47 +0000 (0:00:00.243) 0:03:57.559 ****** 2025-10-09 10:45:43.571339 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.571349 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:43.571360 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:43.571371 | orchestrator | 2025-10-09 10:45:43.571382 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-10-09 10:45:43.571393 | orchestrator | Thursday 09 October 2025 10:40:04 +0000 (0:00:17.540) 0:04:15.100 ****** 2025-10-09 10:45:43.571404 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:43.571415 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.571426 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:43.571436 | orchestrator | 2025-10-09 10:45:43.571447 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-10-09 10:45:43.571458 | orchestrator | 2025-10-09 10:45:43.571468 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-09 10:45:43.571479 | orchestrator | Thursday 09 October 2025 10:40:18 +0000 (0:00:14.219) 0:04:29.319 ****** 2025-10-09 10:45:43.571490 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:43.571502 | orchestrator | 2025-10-09 10:45:43.571518 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-09 10:45:43.571530 | orchestrator | Thursday 09 October 2025 10:40:21 +0000 (0:00:02.205) 0:04:31.525 ****** 2025-10-09 10:45:43.571540 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.571551 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.571562 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.571573 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.571583 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.571594 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.571605 | orchestrator | 2025-10-09 10:45:43.571616 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-10-09 10:45:43.571627 | orchestrator | Thursday 09 October 2025 10:40:22 +0000 (0:00:01.323) 0:04:32.848 ****** 2025-10-09 10:45:43.571637 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.571648 | orchestrator | skipping: [testbed-node-1] 2025-10-09 
10:45:43.571659 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.571670 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:45:43.571680 | orchestrator | 2025-10-09 10:45:43.571691 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-10-09 10:45:43.571702 | orchestrator | Thursday 09 October 2025 10:40:24 +0000 (0:00:01.839) 0:04:34.687 ****** 2025-10-09 10:45:43.571713 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-10-09 10:45:43.571724 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-10-09 10:45:43.571734 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-10-09 10:45:43.571745 | orchestrator | 2025-10-09 10:45:43.571756 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-10-09 10:45:43.571767 | orchestrator | Thursday 09 October 2025 10:40:25 +0000 (0:00:01.348) 0:04:36.036 ****** 2025-10-09 10:45:43.571777 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-10-09 10:45:43.571788 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-10-09 10:45:43.571799 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-10-09 10:45:43.571810 | orchestrator | 2025-10-09 10:45:43.571821 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-10-09 10:45:43.571832 | orchestrator | Thursday 09 October 2025 10:40:27 +0000 (0:00:02.168) 0:04:38.204 ****** 2025-10-09 10:45:43.571842 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-10-09 10:45:43.571853 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.571864 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-10-09 10:45:43.571889 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.571906 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-10-09 10:45:43.571929 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.571940 | orchestrator | 2025-10-09 10:45:43.571951 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-10-09 10:45:43.571962 | orchestrator | Thursday 09 October 2025 10:40:28 +0000 (0:00:00.703) 0:04:38.907 ****** 2025-10-09 10:45:43.571973 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:45:43.571983 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-09 10:45:43.571994 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-10-09 10:45:43.572005 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.572016 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-10-09 10:45:43.572027 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-10-09 10:45:43.572038 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:45:43.572054 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-10-09 10:45:43.572065 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.572075 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-10-09 10:45:43.572086 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  
2025-10-09 10:45:43.572097 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.572108 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-10-09 10:45:43.572132 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-10-09 10:45:43.572143 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-10-09 10:45:43.572154 | orchestrator | 2025-10-09 10:45:43.572165 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-10-09 10:45:43.572176 | orchestrator | Thursday 09 October 2025 10:40:30 +0000 (0:00:02.129) 0:04:41.037 ****** 2025-10-09 10:45:43.572187 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.572197 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.572208 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.572219 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.572230 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.572241 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.572252 | orchestrator | 2025-10-09 10:45:43.572263 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-10-09 10:45:43.572274 | orchestrator | Thursday 09 October 2025 10:40:32 +0000 (0:00:01.401) 0:04:42.439 ****** 2025-10-09 10:45:43.572284 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.572295 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.572306 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.572317 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.572328 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.572339 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.572349 | orchestrator | 2025-10-09 10:45:43.572360 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-10-09 10:45:43.572371 | orchestrator | Thursday 09 October 2025 10:40:34 +0000 (0:00:02.783) 0:04:45.223 ****** 2025-10-09 10:45:43.572389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572541 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572612 | orchestrator | 2025-10-09 10:45:43.572623 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-09 10:45:43.572634 | orchestrator | Thursday 09 October 2025 10:40:39 +0000 (0:00:04.720) 0:04:49.943 ****** 2025-10-09 10:45:43.572649 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:43.572661 | orchestrator | 2025-10-09 10:45:43.572672 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-10-09 10:45:43.572683 | orchestrator | Thursday 09 October 2025 10:40:42 +0000 (0:00:02.532) 0:04:52.475 ****** 2025-10-09 10:45:43.572694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572744 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.572918 | orchestrator | 2025-10-09 10:45:43.572929 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-10-09 10:45:43.572940 | orchestrator | Thursday 09 October 2025 10:40:48 +0000 (0:00:06.318) 0:04:58.794 ****** 2025-10-09 10:45:43.572951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.572967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.572979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.572999 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.573017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.573030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.573041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573052 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.573064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.573079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573091 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.573102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.573140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573152 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.573164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.573175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573186 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.573197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.573213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.573225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573242 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.573254 | orchestrator | 2025-10-09 10:45:43.573264 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-10-09 10:45:43.573275 | orchestrator | Thursday 09 October 2025 10:40:53 +0000 (0:00:04.770) 0:05:03.564 ****** 2025-10-09 10:45:43.573293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.573305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.573317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573328 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.573344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.573363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.573489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573505 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.573516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.573528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573539 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.573550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.573575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.573587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573598 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.573617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.573629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573640 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.573651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.573663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.573681 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.573692 | orchestrator | 2025-10-09 10:45:43.573703 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-09 10:45:43.573714 | orchestrator | Thursday 09 October 2025 10:40:58 +0000 (0:00:05.357) 0:05:08.922 ****** 2025-10-09 10:45:43.573730 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.573741 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.573752 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.573763 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:45:43.573774 | orchestrator | 2025-10-09 10:45:43.573785 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-10-09 10:45:43.573796 | orchestrator | Thursday 09 October 2025 10:41:00 +0000 (0:00:01.795) 0:05:10.718 ****** 2025-10-09 10:45:43.573806 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:45:43.573817 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-10-09 10:45:43.573828 | 
orchestrator | ok: [testbed-node-5 -> localhost] 2025-10-09 10:45:43.573839 | orchestrator | 2025-10-09 10:45:43.573850 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-10-09 10:45:43.573861 | orchestrator | Thursday 09 October 2025 10:41:03 +0000 (0:00:03.313) 0:05:14.031 ****** 2025-10-09 10:45:43.573871 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-10-09 10:45:43.573882 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:45:43.573893 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-10-09 10:45:43.573904 | orchestrator | 2025-10-09 10:45:43.573914 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-10-09 10:45:43.573925 | orchestrator | Thursday 09 October 2025 10:41:07 +0000 (0:00:03.404) 0:05:17.436 ****** 2025-10-09 10:45:43.573936 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:45:43.573947 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:45:43.573958 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:45:43.573968 | orchestrator | 2025-10-09 10:45:43.573979 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-10-09 10:45:43.573990 | orchestrator | Thursday 09 October 2025 10:41:09 +0000 (0:00:02.017) 0:05:19.453 ****** 2025-10-09 10:45:43.574001 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:45:43.574012 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:45:43.574072 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:45:43.574083 | orchestrator | 2025-10-09 10:45:43.574094 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-10-09 10:45:43.574105 | orchestrator | Thursday 09 October 2025 10:41:10 +0000 (0:00:01.874) 0:05:21.328 ****** 2025-10-09 10:45:43.574116 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-10-09 10:45:43.574153 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-10-09 10:45:43.574165 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-10-09 10:45:43.574176 | orchestrator | 2025-10-09 10:45:43.574187 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-10-09 10:45:43.574197 | orchestrator | Thursday 09 October 2025 10:41:12 +0000 (0:00:01.930) 0:05:23.258 ****** 2025-10-09 10:45:43.574208 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-10-09 10:45:43.574219 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-10-09 10:45:43.574230 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-10-09 10:45:43.574241 | orchestrator | 2025-10-09 10:45:43.574252 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-10-09 10:45:43.574262 | orchestrator | Thursday 09 October 2025 10:41:14 +0000 (0:00:01.638) 0:05:24.897 ****** 2025-10-09 10:45:43.574273 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-10-09 10:45:43.574284 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-10-09 10:45:43.574295 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-10-09 10:45:43.574313 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-10-09 10:45:43.574324 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-10-09 10:45:43.574335 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-10-09 10:45:43.574346 | orchestrator | 
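The tasks above stage the external-Ceph wiring for the compute nodes: the client.nova and client.cinder keyrings plus ceph.conf are copied into the per-service config directories that the container definitions bind-mount (/etc/kolla/nova-compute/ and /etc/kolla/nova-libvirt/ onto /var/lib/kolla/config_files/), and the tasks that follow define matching secrets inside the nova_libvirt container. Below is a minimal verification sketch, not part of the kolla-ansible roles: it assumes the Docker CLI as the container engine, assumes ceph.conf as the file name under /etc/kolla/<service>/, and reuses the secret UUIDs printed later by the "Pushing nova secret xml for libvirt" items.

#!/usr/bin/env python3
# Illustrative post-deploy check only. Paths follow the bind-mount sources shown
# in the log; file and secret names beyond those printed in the log are assumptions.
import pathlib
import subprocess

COMPUTE_SERVICES = ("nova-compute", "nova-libvirt")
EXPECTED_FILES = ("ceph.conf",)  # keyring file names vary per deployment, so not checked here

def check_config_dirs() -> None:
    """Confirm the per-service config directories populated by the copy tasks exist."""
    for service in COMPUTE_SERVICES:
        cfg = pathlib.Path("/etc/kolla") / service
        print(f"{cfg}: {'present' if cfg.is_dir() else 'MISSING'}")
        for name in EXPECTED_FILES:
            f = cfg / name
            print(f"  {f.name}: {'present' if f.is_file() else 'missing'}")

def check_libvirt_secrets() -> None:
    """List libvirt secrets inside the nova_libvirt container (UUIDs taken from this log)."""
    expected = {
        "5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd",  # client.nova secret
        "63dd366f-e403-41f2-beff-dad9980a1637",  # client.cinder secret
    }
    # Assumes Docker; adjust the CLI if the deployment uses another engine.
    out = subprocess.run(
        ["docker", "exec", "nova_libvirt", "virsh", "secret-list"],
        capture_output=True, text=True, check=False,
    ).stdout
    for uuid in expected:
        print(f"{uuid}: {'defined' if uuid in out else 'not defined yet'}")

if __name__ == "__main__":
    check_config_dirs()
    check_libvirt_secrets()

Run on one of the compute nodes (testbed-node-3/4/5) after the play has finished; ceph.conf should be present in both directories and both secret UUIDs should report as defined.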
2025-10-09 10:45:43.574357 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-10-09 10:45:43.574368 | orchestrator | Thursday 09 October 2025 10:41:22 +0000 (0:00:07.610) 0:05:32.507 ****** 2025-10-09 10:45:43.574378 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.574389 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.574400 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.574411 | orchestrator | 2025-10-09 10:45:43.574422 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-10-09 10:45:43.574433 | orchestrator | Thursday 09 October 2025 10:41:22 +0000 (0:00:00.563) 0:05:33.070 ****** 2025-10-09 10:45:43.574444 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.574454 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.574465 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.574476 | orchestrator | 2025-10-09 10:45:43.574487 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-10-09 10:45:43.574498 | orchestrator | Thursday 09 October 2025 10:41:23 +0000 (0:00:00.773) 0:05:33.843 ****** 2025-10-09 10:45:43.574509 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.574520 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.574530 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.574541 | orchestrator | 2025-10-09 10:45:43.574552 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-10-09 10:45:43.574563 | orchestrator | Thursday 09 October 2025 10:41:26 +0000 (0:00:02.704) 0:05:36.548 ****** 2025-10-09 10:45:43.574574 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-10-09 10:45:43.574586 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-10-09 10:45:43.574597 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-10-09 10:45:43.574613 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-10-09 10:45:43.574624 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-10-09 10:45:43.574635 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-10-09 10:45:43.574646 | orchestrator | 2025-10-09 10:45:43.574657 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-10-09 10:45:43.574668 | orchestrator | Thursday 09 October 2025 10:41:32 +0000 (0:00:06.438) 0:05:42.987 ****** 2025-10-09 10:45:43.574679 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:45:43.574690 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:45:43.574701 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:45:43.574711 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:45:43.574722 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.574733 | orchestrator | changed: 
[testbed-node-5] => (item=None) 2025-10-09 10:45:43.574743 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.574754 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:45:43.574765 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.574776 | orchestrator | 2025-10-09 10:45:43.574787 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-10-09 10:45:43.574798 | orchestrator | Thursday 09 October 2025 10:41:38 +0000 (0:00:05.466) 0:05:48.453 ****** 2025-10-09 10:45:43.574819 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.574829 | orchestrator | 2025-10-09 10:45:43.574840 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-10-09 10:45:43.574851 | orchestrator | Thursday 09 October 2025 10:41:38 +0000 (0:00:00.154) 0:05:48.607 ****** 2025-10-09 10:45:43.574862 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.574873 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.574884 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.574894 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.574905 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.574916 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.574927 | orchestrator | 2025-10-09 10:45:43.574943 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-10-09 10:45:43.574954 | orchestrator | Thursday 09 October 2025 10:41:39 +0000 (0:00:00.785) 0:05:49.393 ****** 2025-10-09 10:45:43.574965 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:45:43.574976 | orchestrator | 2025-10-09 10:45:43.574987 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-10-09 10:45:43.574998 | orchestrator | Thursday 09 October 2025 10:41:39 +0000 (0:00:00.775) 0:05:50.169 ****** 2025-10-09 10:45:43.575009 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.575020 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.575031 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.575042 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.575052 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.575063 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.575074 | orchestrator | 2025-10-09 10:45:43.575085 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-10-09 10:45:43.575096 | orchestrator | Thursday 09 October 2025 10:41:40 +0000 (0:00:00.969) 0:05:51.139 ****** 2025-10-09 10:45:43.575107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575419 | orchestrator | 2025-10-09 10:45:43.575430 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-10-09 10:45:43.575441 | orchestrator | Thursday 09 October 2025 10:41:44 +0000 (0:00:04.191) 0:05:55.330 ****** 2025-10-09 10:45:43.575452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.575469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.575482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.575493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.575509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.575528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.575545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.575666 | orchestrator | 2025-10-09 10:45:43.575677 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-10-09 10:45:43.575688 | orchestrator | Thursday 09 October 2025 10:41:53 +0000 (0:00:08.414) 0:06:03.745 ****** 2025-10-09 10:45:43.575699 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.575710 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.575721 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.575739 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.575749 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.575760 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.575771 | orchestrator | 2025-10-09 10:45:43.575782 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-10-09 10:45:43.575793 | orchestrator | Thursday 09 October 2025 10:41:55 +0000 (0:00:01.597) 0:06:05.343 ****** 2025-10-09 10:45:43.575803 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-09 10:45:43.575814 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-09 10:45:43.575825 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-09 10:45:43.575836 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-09 10:45:43.575847 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-09 10:45:43.575857 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-09 10:45:43.575873 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-09 10:45:43.575884 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.575895 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-09 10:45:43.575906 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.575917 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-09 10:45:43.575928 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.575938 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-09 10:45:43.575949 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-09 10:45:43.575960 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-09 10:45:43.575971 | orchestrator | 2025-10-09 10:45:43.575981 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-10-09 10:45:43.575993 | orchestrator | Thursday 09 October 2025 10:41:59 +0000 (0:00:04.377) 0:06:09.720 ****** 2025-10-09 10:45:43.576003 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.576014 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.576024 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.576035 | orchestrator | skipping: [testbed-node-0] 
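[Editor's note, illustrative only — not part of the job output.] The "Copying over libvirt configuration" step above is a looped template copy onto the compute hosts. A minimal sketch of such a task is shown below; the loop items and the /etc/kolla/nova-libvirt config directory are taken from this log, while the module options, file mode, and handler wiring are assumptions rather than the role's literal source.

- name: Copying over libvirt configuration (sketch)
  ansible.builtin.template:
    src: "{{ item.src }}"
    dest: "/etc/kolla/nova-libvirt/{{ item.dest }}"   # bind-mounted read-only into nova_libvirt (see the volumes listed above)
    mode: "0660"
  become: true
  loop:
    - { src: "qemu.conf.j2", dest: "qemu.conf" }
    - { src: "libvirtd.conf.j2", dest: "libvirtd.conf" }
  when: inventory_hostname in groups["compute"]       # matches the skip on nodes 0-2 and the change on nodes 3-5 above
  notify:
    - Restart nova-libvirt container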
2025-10-09 10:45:43.576046 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.576056 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.576067 | orchestrator | 2025-10-09 10:45:43.576078 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-10-09 10:45:43.576089 | orchestrator | Thursday 09 October 2025 10:42:00 +0000 (0:00:00.660) 0:06:10.380 ****** 2025-10-09 10:45:43.576100 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-09 10:45:43.576111 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-09 10:45:43.576180 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-09 10:45:43.576192 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-09 10:45:43.576203 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-09 10:45:43.576214 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-09 10:45:43.576225 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-09 10:45:43.576243 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-09 10:45:43.576254 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-09 10:45:43.576265 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-09 10:45:43.576276 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.576287 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-09 10:45:43.576298 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.576308 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-09 10:45:43.576319 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.576330 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:45:43.576341 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:45:43.576352 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:45:43.576362 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:45:43.576373 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:45:43.576384 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:45:43.576395 | orchestrator | 2025-10-09 10:45:43.576406 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] 
********************************** 2025-10-09 10:45:43.576417 | orchestrator | Thursday 09 October 2025 10:42:06 +0000 (0:00:06.278) 0:06:16.658 ****** 2025-10-09 10:45:43.576428 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:45:43.576439 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:45:43.576450 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:45:43.576460 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:45:43.576476 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-09 10:45:43.576487 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:45:43.576498 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-09 10:45:43.576509 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-09 10:45:43.576520 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:45:43.576530 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:45:43.576541 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:45:43.576552 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:45:43.576563 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-09 10:45:43.576574 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.576585 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-09 10:45:43.576595 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.576606 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:45:43.576623 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-09 10:45:43.576634 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.576644 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:45:43.576653 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:45:43.576663 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:45:43.576673 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:45:43.576687 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:45:43.576697 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:45:43.576706 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:45:43.576716 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:45:43.576726 | orchestrator | 2025-10-09 10:45:43.576736 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-10-09 
10:45:43.576745 | orchestrator | Thursday 09 October 2025 10:42:14 +0000 (0:00:07.846) 0:06:24.504 ****** 2025-10-09 10:45:43.576755 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.576764 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.576774 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.576784 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.576793 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.576803 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.576813 | orchestrator | 2025-10-09 10:45:43.576822 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-10-09 10:45:43.576832 | orchestrator | Thursday 09 October 2025 10:42:14 +0000 (0:00:00.741) 0:06:25.246 ****** 2025-10-09 10:45:43.576842 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.576851 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.576861 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.576871 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.576880 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.576890 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.576899 | orchestrator | 2025-10-09 10:45:43.576909 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-10-09 10:45:43.576919 | orchestrator | Thursday 09 October 2025 10:42:15 +0000 (0:00:00.611) 0:06:25.857 ****** 2025-10-09 10:45:43.576929 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.576938 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.576948 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.576957 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.576967 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.576976 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.576986 | orchestrator | 2025-10-09 10:45:43.576996 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-10-09 10:45:43.577005 | orchestrator | Thursday 09 October 2025 10:42:18 +0000 (0:00:03.316) 0:06:29.174 ****** 2025-10-09 10:45:43.577020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.577039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.577049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.577060 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.577075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.577085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.577096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
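[Editor's note, for readability — not part of the job output.] Every item dictionary dumped above carries the same healthcheck structure. One of them (nova_compute) is rendered below as plain YAML; the values are copied from the log, and the field comments reflect standard Docker-style healthcheck semantics, which is an interpretation rather than anything the log itself states.

healthcheck:
  interval: "30"       # seconds between probe runs
  retries: "3"         # consecutive failures before the container counts as unhealthy
  start_period: "5"    # grace period after container start during which failures are ignored
  timeout: "30"        # per-probe timeout
  test: ["CMD-SHELL", "healthcheck_port nova-compute 5672"]   # kolla image helper probing the nova-compute process against port 5672 (AMQP)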
2025-10-09 10:45:43.577111 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.577139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:45:43.577150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:45:43.577166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.577176 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.577187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.577197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.577207 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.577217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.577237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.577247 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.577257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:45:43.577272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:45:43.577282 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.577292 | orchestrator | 2025-10-09 10:45:43.577302 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-10-09 10:45:43.577312 | orchestrator | Thursday 09 October 2025 10:42:21 +0000 (0:00:02.231) 0:06:31.405 ****** 2025-10-09 10:45:43.577322 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-10-09 10:45:43.577332 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-10-09 
10:45:43.577341 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.577351 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-10-09 10:45:43.577361 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-10-09 10:45:43.577370 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.577380 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-10-09 10:45:43.577389 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-10-09 10:45:43.577399 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.577408 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-10-09 10:45:43.577418 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-10-09 10:45:43.577428 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.577437 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-10-09 10:45:43.577447 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-10-09 10:45:43.577456 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.577466 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-10-09 10:45:43.577476 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-10-09 10:45:43.577491 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.577501 | orchestrator | 2025-10-09 10:45:43.577511 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-10-09 10:45:43.577520 | orchestrator | Thursday 09 October 2025 10:42:22 +0000 (0:00:01.038) 0:06:32.444 ****** 2025-10-09 10:45:43.577530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 
10:45:43.577560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577621 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577685 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:45:43.577720 | orchestrator | 2025-10-09 10:45:43.577730 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-09 10:45:43.577740 | orchestrator | Thursday 09 October 2025 10:42:25 +0000 (0:00:03.091) 0:06:35.535 ****** 2025-10-09 10:45:43.577750 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.577759 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.577818 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.577830 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.577839 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.577849 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.577858 | orchestrator | 2025-10-09 10:45:43.577868 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:45:43.577878 | orchestrator | Thursday 09 October 2025 10:42:26 +0000 (0:00:00.860) 0:06:36.395 ****** 2025-10-09 10:45:43.577887 | orchestrator | 2025-10-09 10:45:43.577897 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:45:43.577913 | orchestrator | Thursday 09 October 2025 10:42:26 +0000 (0:00:00.146) 0:06:36.542 ****** 2025-10-09 10:45:43.577922 | orchestrator | 2025-10-09 10:45:43.577932 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:45:43.577941 | orchestrator 
| Thursday 09 October 2025 10:42:26 +0000 (0:00:00.148) 0:06:36.690 ****** 2025-10-09 10:45:43.577951 | orchestrator | 2025-10-09 10:45:43.577960 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:45:43.577970 | orchestrator | Thursday 09 October 2025 10:42:26 +0000 (0:00:00.145) 0:06:36.836 ****** 2025-10-09 10:45:43.577979 | orchestrator | 2025-10-09 10:45:43.577989 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:45:43.577999 | orchestrator | Thursday 09 October 2025 10:42:26 +0000 (0:00:00.141) 0:06:36.977 ****** 2025-10-09 10:45:43.578008 | orchestrator | 2025-10-09 10:45:43.578042 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:45:43.578054 | orchestrator | Thursday 09 October 2025 10:42:26 +0000 (0:00:00.137) 0:06:37.115 ****** 2025-10-09 10:45:43.578064 | orchestrator | 2025-10-09 10:45:43.578073 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-10-09 10:45:43.578083 | orchestrator | Thursday 09 October 2025 10:42:27 +0000 (0:00:00.342) 0:06:37.458 ****** 2025-10-09 10:45:43.578093 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.578102 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:43.578112 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:43.578136 | orchestrator | 2025-10-09 10:45:43.578146 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-10-09 10:45:43.578155 | orchestrator | Thursday 09 October 2025 10:42:39 +0000 (0:00:12.672) 0:06:50.130 ****** 2025-10-09 10:45:43.578165 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.578175 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:43.578184 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:43.578194 | orchestrator | 2025-10-09 10:45:43.578203 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-10-09 10:45:43.578213 | orchestrator | Thursday 09 October 2025 10:42:54 +0000 (0:00:14.313) 0:07:04.443 ****** 2025-10-09 10:45:43.578223 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.578232 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.578242 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.578252 | orchestrator | 2025-10-09 10:45:43.578262 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-10-09 10:45:43.578271 | orchestrator | Thursday 09 October 2025 10:43:20 +0000 (0:00:26.113) 0:07:30.557 ****** 2025-10-09 10:45:43.578281 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.578290 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.578300 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.578310 | orchestrator | 2025-10-09 10:45:43.578319 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-10-09 10:45:43.578329 | orchestrator | Thursday 09 October 2025 10:43:50 +0000 (0:00:29.970) 0:08:00.528 ****** 2025-10-09 10:45:43.578339 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-10-09 10:45:43.578353 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
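[Editor's note, illustrative only — not part of the job output.] The FAILED - RETRYING lines above are the normal retry-until pattern while libvirtd finishes starting inside the freshly restarted container. A hedged sketch of that pattern follows; the probe command mirrors the nova_libvirt healthcheck seen in this log and the retry count mirrors the "10 retries left" message, while the container-engine invocation and delay are assumptions, not the role's literal task.

- name: Checking libvirt container is ready (sketch)
  ansible.builtin.command: docker exec nova_libvirt virsh version --daemon   # same probe as the container healthcheck above; assumes the Docker engine
  register: libvirt_ready
  until: libvirt_ready.rc == 0
  retries: 10                                                                # first attempt failed above, then succeeded on retry
  delay: 5                                                                   # assumed pause between attempts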
2025-10-09 10:45:43.578363 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-10-09 10:45:43.578373 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.578383 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.578392 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.578402 | orchestrator | 2025-10-09 10:45:43.578412 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-10-09 10:45:43.578421 | orchestrator | Thursday 09 October 2025 10:43:56 +0000 (0:00:06.323) 0:08:06.851 ****** 2025-10-09 10:45:43.578437 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.578446 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.578456 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.578465 | orchestrator | 2025-10-09 10:45:43.578475 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-10-09 10:45:43.578485 | orchestrator | Thursday 09 October 2025 10:43:57 +0000 (0:00:00.720) 0:08:07.572 ****** 2025-10-09 10:45:43.578494 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:45:43.578504 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:45:43.578513 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:45:43.578523 | orchestrator | 2025-10-09 10:45:43.578533 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-10-09 10:45:43.578542 | orchestrator | Thursday 09 October 2025 10:44:18 +0000 (0:00:21.337) 0:08:28.910 ****** 2025-10-09 10:45:43.578552 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.578562 | orchestrator | 2025-10-09 10:45:43.578571 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-10-09 10:45:43.578581 | orchestrator | Thursday 09 October 2025 10:44:18 +0000 (0:00:00.142) 0:08:29.053 ****** 2025-10-09 10:45:43.578591 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.578601 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.578610 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.578620 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.578630 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.578639 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
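[Editor's note, illustrative only — not part of the job output.] The wait above blocks until the freshly restarted nova-compute services show up, after which the role runs host discovery against a conductor (the "Discover nova hosts" task that follows, delegated to testbed-node-0). A hedged sketch of that discovery step is below; the nova_conductor container name comes from this log, while the docker exec invocation, delegation expression, and flags are assumptions about the mechanics rather than the role's literal source.

- name: Discover nova hosts (sketch)
  ansible.builtin.command: docker exec nova_conductor nova-manage cell_v2 discover_hosts --by-service
  run_once: true
  delegate_to: "{{ groups['nova-conductor'][0] }}"   # the log shows this delegated to testbed-node-0
  changed_when: false                                # the real task reported "ok", not "changed"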
2025-10-09 10:45:43.578654 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:45:43.578664 | orchestrator | 2025-10-09 10:45:43.578674 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-10-09 10:45:43.578683 | orchestrator | Thursday 09 October 2025 10:44:44 +0000 (0:00:26.082) 0:08:55.135 ****** 2025-10-09 10:45:43.578693 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.578703 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.578712 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.578722 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.578731 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.578741 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.578751 | orchestrator | 2025-10-09 10:45:43.578760 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-10-09 10:45:43.578770 | orchestrator | Thursday 09 October 2025 10:44:55 +0000 (0:00:10.475) 0:09:05.611 ****** 2025-10-09 10:45:43.578779 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.578789 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.578799 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.578808 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.578818 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.578827 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-10-09 10:45:43.578837 | orchestrator | 2025-10-09 10:45:43.578847 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-10-09 10:45:43.578856 | orchestrator | Thursday 09 October 2025 10:45:00 +0000 (0:00:04.793) 0:09:10.404 ****** 2025-10-09 10:45:43.578866 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:45:43.578876 | orchestrator | 2025-10-09 10:45:43.578885 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-10-09 10:45:43.578895 | orchestrator | Thursday 09 October 2025 10:45:16 +0000 (0:00:16.207) 0:09:26.611 ****** 2025-10-09 10:45:43.578904 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:45:43.578914 | orchestrator | 2025-10-09 10:45:43.578924 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-10-09 10:45:43.578933 | orchestrator | Thursday 09 October 2025 10:45:17 +0000 (0:00:01.587) 0:09:28.199 ****** 2025-10-09 10:45:43.578952 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.578962 | orchestrator | 2025-10-09 10:45:43.578971 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-10-09 10:45:43.578981 | orchestrator | Thursday 09 October 2025 10:45:19 +0000 (0:00:01.759) 0:09:29.958 ****** 2025-10-09 10:45:43.578991 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:45:43.579000 | orchestrator | 2025-10-09 10:45:43.579010 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-10-09 10:45:43.579019 | orchestrator | Thursday 09 October 2025 10:45:33 +0000 (0:00:13.488) 0:09:43.447 ****** 2025-10-09 10:45:43.579029 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:45:43.579039 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:45:43.579048 | orchestrator | ok: 
[testbed-node-5] 2025-10-09 10:45:43.579058 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:43.579068 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:45:43.579077 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:45:43.579087 | orchestrator | 2025-10-09 10:45:43.579096 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-10-09 10:45:43.579106 | orchestrator | 2025-10-09 10:45:43.579116 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-10-09 10:45:43.579173 | orchestrator | Thursday 09 October 2025 10:45:34 +0000 (0:00:01.857) 0:09:45.304 ****** 2025-10-09 10:45:43.579183 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:43.579193 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:43.579202 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:43.579212 | orchestrator | 2025-10-09 10:45:43.579222 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-10-09 10:45:43.579231 | orchestrator | 2025-10-09 10:45:43.579249 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-10-09 10:45:43.579259 | orchestrator | Thursday 09 October 2025 10:45:36 +0000 (0:00:01.161) 0:09:46.465 ****** 2025-10-09 10:45:43.579268 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.579278 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.579288 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.579297 | orchestrator | 2025-10-09 10:45:43.579307 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-10-09 10:45:43.579317 | orchestrator | 2025-10-09 10:45:43.579326 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-10-09 10:45:43.579336 | orchestrator | Thursday 09 October 2025 10:45:36 +0000 (0:00:00.575) 0:09:47.041 ****** 2025-10-09 10:45:43.579345 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-10-09 10:45:43.579355 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-10-09 10:45:43.579365 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-10-09 10:45:43.579374 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-10-09 10:45:43.579384 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-10-09 10:45:43.579393 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-10-09 10:45:43.579401 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:45:43.579409 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-10-09 10:45:43.579417 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-10-09 10:45:43.579425 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-10-09 10:45:43.579433 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-10-09 10:45:43.579441 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-10-09 10:45:43.579449 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-10-09 10:45:43.579457 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:45:43.579465 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-10-09 10:45:43.579473 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-10-09 10:45:43.579485 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-10-09 10:45:43.579498 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-10-09 10:45:43.579506 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-10-09 10:45:43.579514 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-10-09 10:45:43.579522 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:45:43.579530 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-10-09 10:45:43.579538 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-10-09 10:45:43.579546 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-10-09 10:45:43.579554 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-10-09 10:45:43.579562 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-10-09 10:45:43.579569 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-10-09 10:45:43.579577 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.579585 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-10-09 10:45:43.579593 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-10-09 10:45:43.579601 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-10-09 10:45:43.579609 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-10-09 10:45:43.579617 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-10-09 10:45:43.579625 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-10-09 10:45:43.579633 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.579641 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-10-09 10:45:43.579649 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-10-09 10:45:43.579657 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-10-09 10:45:43.579665 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-10-09 10:45:43.579673 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-10-09 10:45:43.579680 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-10-09 10:45:43.579688 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.579696 | orchestrator | 2025-10-09 10:45:43.579704 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-10-09 10:45:43.579712 | orchestrator | 2025-10-09 10:45:43.579720 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-10-09 10:45:43.579728 | orchestrator | Thursday 09 October 2025 10:45:38 +0000 (0:00:01.414) 0:09:48.456 ****** 2025-10-09 10:45:43.579736 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-10-09 10:45:43.579744 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-10-09 10:45:43.579752 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.579760 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-10-09 10:45:43.579767 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-10-09 10:45:43.579775 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.579783 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-10-09 10:45:43.579791 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-10-09 10:45:43.579799 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.579807 | orchestrator | 2025-10-09 10:45:43.579814 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-10-09 10:45:43.579822 | orchestrator | 2025-10-09 10:45:43.579830 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-10-09 10:45:43.579838 | orchestrator | Thursday 09 October 2025 10:45:38 +0000 (0:00:00.802) 0:09:49.258 ****** 2025-10-09 10:45:43.579850 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.579858 | orchestrator | 2025-10-09 10:45:43.579866 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-10-09 10:45:43.579874 | orchestrator | 2025-10-09 10:45:43.579886 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-10-09 10:45:43.579894 | orchestrator | Thursday 09 October 2025 10:45:39 +0000 (0:00:00.731) 0:09:49.990 ****** 2025-10-09 10:45:43.579902 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:43.579910 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:43.579918 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:43.579926 | orchestrator | 2025-10-09 10:45:43.579933 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:45:43.579942 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:45:43.579950 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-10-09 10:45:43.579958 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-10-09 10:45:43.579966 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-10-09 10:45:43.579974 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-10-09 10:45:43.579982 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-10-09 10:45:43.579993 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-10-09 10:45:43.580002 | orchestrator | 2025-10-09 10:45:43.580010 | orchestrator | 2025-10-09 10:45:43.580018 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:45:43.580026 | orchestrator | Thursday 09 October 2025 10:45:40 +0000 (0:00:00.489) 0:09:50.479 ****** 2025-10-09 10:45:43.580034 | orchestrator | =============================================================================== 2025-10-09 10:45:43.580042 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.37s 2025-10-09 10:45:43.580050 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.97s 2025-10-09 10:45:43.580058 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.11s 2025-10-09 10:45:43.580066 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 26.08s 2025-10-09 10:45:43.580074 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.34s 2025-10-09 10:45:43.580082 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 21.34s 2025-10-09 10:45:43.580090 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.64s 2025-10-09 10:45:43.580098 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.69s 2025-10-09 10:45:43.580106 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.54s 2025-10-09 10:45:43.580114 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.21s 2025-10-09 10:45:43.580136 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.08s 2025-10-09 10:45:43.580144 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.31s 2025-10-09 10:45:43.580152 | orchestrator | nova : Restart nova-api container -------------------------------------- 14.22s 2025-10-09 10:45:43.580160 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.05s 2025-10-09 10:45:43.580168 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.49s 2025-10-09 10:45:43.580176 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.67s 2025-10-09 10:45:43.580184 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.67s 2025-10-09 10:45:43.580197 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.48s 2025-10-09 10:45:43.580205 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.65s 2025-10-09 10:45:43.580213 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.59s 2025-10-09 10:45:43.580221 | orchestrator | 2025-10-09 10:45:43 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:43.580229 | orchestrator | 2025-10-09 10:45:43 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:43.580237 | orchestrator | 2025-10-09 10:45:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:46.613992 | orchestrator | 2025-10-09 10:45:46 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state STARTED 2025-10-09 10:45:46.618110 | orchestrator | 2025-10-09 10:45:46 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:46.618163 | orchestrator | 2025-10-09 10:45:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:49.655478 | orchestrator | 2025-10-09 10:45:49 | INFO  | Task 3f86b83c-e9e1-498b-8eb1-af13cf3c6aea is in state SUCCESS 2025-10-09 10:45:49.656704 | orchestrator | 2025-10-09 10:45:49.656737 | orchestrator | 2025-10-09 10:45:49.656748 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:45:49.656759 | orchestrator | 2025-10-09 10:45:49.656769 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:45:49.656779 | orchestrator | Thursday 09 October 2025 10:43:33 +0000 (0:00:00.327) 0:00:00.327 ****** 2025-10-09 10:45:49.656789 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:49.656800 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:45:49.656838 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:45:49.656849 | orchestrator | 2025-10-09 10:45:49.656859 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-10-09 10:45:49.656883 | orchestrator | Thursday 09 October 2025 10:43:33 +0000 (0:00:00.324) 0:00:00.652 ****** 2025-10-09 10:45:49.656902 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-10-09 10:45:49.656913 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-10-09 10:45:49.656922 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-10-09 10:45:49.656932 | orchestrator | 2025-10-09 10:45:49.656942 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-10-09 10:45:49.656951 | orchestrator | 2025-10-09 10:45:49.656961 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-10-09 10:45:49.656971 | orchestrator | Thursday 09 October 2025 10:43:33 +0000 (0:00:00.463) 0:00:01.116 ****** 2025-10-09 10:45:49.656980 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:49.656991 | orchestrator | 2025-10-09 10:45:49.657000 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-10-09 10:45:49.657010 | orchestrator | Thursday 09 October 2025 10:43:34 +0000 (0:00:00.526) 0:00:01.642 ****** 2025-10-09 10:45:49.657023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657079 | orchestrator | 2025-10-09 10:45:49.657089 | 
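The item dictionary repeated for every node in the grafana tasks above and below is the service definition the role loops over; rendered as YAML for readability (values copied verbatim from the log entries):

    grafana:
      container_name: grafana
      group: grafana
      enabled: true
      image: registry.osism.tech/kolla/release/grafana:12.2.0.20250928
      volumes:
        - /etc/kolla/grafana/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      haproxy:
        grafana_server:
          enabled: "yes"
          mode: http
          external: false
          port: "3000"
          listen_port: "3000"
        grafana_server_external:
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "3000"
          listen_port: "3000"
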
orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-10-09 10:45:49.657099 | orchestrator | Thursday 09 October 2025 10:43:35 +0000 (0:00:00.859) 0:00:02.501 ****** 2025-10-09 10:45:49.657108 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-10-09 10:45:49.657148 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-10-09 10:45:49.657159 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:45:49.657168 | orchestrator | 2025-10-09 10:45:49.657178 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-10-09 10:45:49.657188 | orchestrator | Thursday 09 October 2025 10:43:36 +0000 (0:00:00.871) 0:00:03.373 ****** 2025-10-09 10:45:49.657198 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:49.657207 | orchestrator | 2025-10-09 10:45:49.657225 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-10-09 10:45:49.657235 | orchestrator | Thursday 09 October 2025 10:43:36 +0000 (0:00:00.756) 0:00:04.129 ****** 2025-10-09 10:45:49.657258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657297 | orchestrator | 2025-10-09 10:45:49.657309 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS 
certificate] *** 2025-10-09 10:45:49.657319 | orchestrator | Thursday 09 October 2025 10:43:38 +0000 (0:00:01.499) 0:00:05.629 ****** 2025-10-09 10:45:49.657330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:45:49.657342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:45:49.657353 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:49.657364 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:49.657386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:45:49.657398 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:49.657409 | orchestrator | 2025-10-09 10:45:49.657421 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-10-09 10:45:49.657431 | orchestrator | Thursday 09 October 2025 10:43:38 +0000 (0:00:00.448) 0:00:06.078 ****** 2025-10-09 10:45:49.657442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:45:49.657461 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:45:49.657472 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:49.657484 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:49.657495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:45:49.657506 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:49.657518 | orchestrator | 2025-10-09 10:45:49.657528 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-10-09 10:45:49.657539 | orchestrator | Thursday 09 October 2025 10:43:39 +0000 (0:00:00.873) 0:00:06.951 ****** 2025-10-09 10:45:49.657550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657596 | orchestrator | 2025-10-09 10:45:49.657607 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-10-09 10:45:49.657624 | orchestrator | Thursday 09 October 2025 10:43:41 +0000 (0:00:01.342) 0:00:08.293 ****** 2025-10-09 10:45:49.657635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.657668 | orchestrator | 2025-10-09 10:45:49.657678 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-10-09 10:45:49.657687 | orchestrator | Thursday 09 October 2025 10:43:42 +0000 (0:00:01.384) 0:00:09.678 ****** 2025-10-09 10:45:49.657697 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:49.657707 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:49.657716 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:49.657726 | orchestrator | 2025-10-09 10:45:49.657735 | 
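The next task renders /ansible/roles/grafana/templates/prometheus.yaml.j2 into Grafana's data source provisioning directory. A minimal sketch of what such a Prometheus data source provisioning file typically contains (upstream Grafana provisioning format; the URL below is a placeholder, not the value the template actually renders):

    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus.example:9090   # placeholder endpoint, not taken from the log
        isDefault: true
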
orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-10-09 10:45:49.657745 | orchestrator | Thursday 09 October 2025 10:43:42 +0000 (0:00:00.421) 0:00:10.099 ****** 2025-10-09 10:45:49.657754 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-10-09 10:45:49.657764 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-10-09 10:45:49.657774 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-10-09 10:45:49.657784 | orchestrator | 2025-10-09 10:45:49.657793 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-10-09 10:45:49.657802 | orchestrator | Thursday 09 October 2025 10:43:44 +0000 (0:00:01.188) 0:00:11.287 ****** 2025-10-09 10:45:49.657826 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-10-09 10:45:49.657841 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-10-09 10:45:49.657851 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-10-09 10:45:49.657866 | orchestrator | 2025-10-09 10:45:49.657876 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-10-09 10:45:49.657885 | orchestrator | Thursday 09 October 2025 10:43:45 +0000 (0:00:01.426) 0:00:12.713 ****** 2025-10-09 10:45:49.657895 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:45:49.657905 | orchestrator | 2025-10-09 10:45:49.657914 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-10-09 10:45:49.657923 | orchestrator | Thursday 09 October 2025 10:43:46 +0000 (0:00:00.807) 0:00:13.521 ****** 2025-10-09 10:45:49.657933 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-10-09 10:45:49.657942 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-10-09 10:45:49.657952 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:49.657961 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:45:49.657970 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:45:49.657980 | orchestrator | 2025-10-09 10:45:49.657989 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-10-09 10:45:49.657999 | orchestrator | Thursday 09 October 2025 10:43:46 +0000 (0:00:00.725) 0:00:14.247 ****** 2025-10-09 10:45:49.658008 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:49.658057 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:49.658069 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:49.658078 | orchestrator | 2025-10-09 10:45:49.658088 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-10-09 10:45:49.658097 | orchestrator | Thursday 09 October 2025 10:43:47 +0000 (0:00:00.522) 0:00:14.769 ****** 2025-10-09 10:45:49.658108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097849, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0013714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097849, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0013714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097849, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0013714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098028, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.026758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098028, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.026758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098028, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.026758, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097927, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0045846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097927, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0045846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097927, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0045846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098032, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0283592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098032, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0283592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-10-09 10:45:49.658278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098032, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0283592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097959, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0122755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097959, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0122755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097959, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0122755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098016, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0245428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098016, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0245428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098016, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0245428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097847, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003440.9860952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097847, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003440.9860952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097847, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003440.9860952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 
'inode': 1097911, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0022895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097911, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0022895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097911, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0022895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097929, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0056515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097929, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0056515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097929, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0056515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097971, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0143154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097971, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0143154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097971, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0143154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098023, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0259776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098023, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0259776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658637 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098023, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0259776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097917, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.004081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097917, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.004081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097917, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.004081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097982, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0177104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097982, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0177104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097982, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0177104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097965, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.01382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097965, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.01382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097965, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.01382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097953, 'dev': 134, 'nlink': 1, 'atime': 
1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0108914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097953, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0108914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097953, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0108914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097943, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0076647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097943, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0076647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097943, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0076647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097974, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0166183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097974, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0166183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097974, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0166183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1097937, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0067697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1097937, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0067697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658908 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1097937, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0067697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098020, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.024883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098020, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.024883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098020, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.024883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098502, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1229572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098502, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1229572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098502, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1229572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.658999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098199, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0692139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098199, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0692139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098199, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0692139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 30898, 'inode': 1098066, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0347676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098066, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0347676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098066, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0347676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098246, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0717335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098246, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0717335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098246, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 
1759130426.0, 'ctime': 1760003441.0717335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098044, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0297995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098044, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0297995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098044, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0297995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098410, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1032872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098410, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1032872, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098410, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1032872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098249, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1003196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098249, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1003196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098249, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1003196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098415, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1039329, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098415, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1039329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098493, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1208296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098415, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1039329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098493, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1208296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098406, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1021645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098406, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1021645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098493, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1208296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098243, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0693595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098243, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0693595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098406, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1021645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659395 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098187, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0568023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098187, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0568023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098243, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0693595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098240, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0693595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098240, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0693595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098187, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0568023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098069, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0553937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098069, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0553937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098240, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0693595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098244, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0708663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098244, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0708663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098069, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0553937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098476, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1193597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098244, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0708663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098476, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1193597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098423, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1155365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098423, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1155365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098476, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1193597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098049, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0309253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098423, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1155365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098049, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0309253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098054, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.034246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098054, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.034246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098049, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.0309253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098398, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1012726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098054, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 
'mtime': 1759130426.0, 'ctime': 1760003441.034246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098398, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1012726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098419, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1042466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098398, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1012726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098419, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1042466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098419, 'dev': 134, 'nlink': 1, 'atime': 1759130426.0, 'mtime': 1759130426.0, 'ctime': 1760003441.1042466, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:45:49.659811 | orchestrator | 2025-10-09 10:45:49.659821 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-10-09 10:45:49.659830 | orchestrator | Thursday 09 October 2025 10:44:27 +0000 (0:00:40.281) 0:00:55.051 ****** 2025-10-09 10:45:49.659841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.659851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.659861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.2.0.20250928', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:45:49.659871 | orchestrator | 2025-10-09 10:45:49.659880 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-10-09 10:45:49.659890 | orchestrator | Thursday 09 October 2025 10:44:29 +0000 (0:00:01.386) 0:00:56.437 ****** 2025-10-09 10:45:49.659910 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:49.659920 | orchestrator | 2025-10-09 10:45:49.659930 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-10-09 10:45:49.659943 | orchestrator | Thursday 09 October 2025 10:44:31 +0000 (0:00:02.772) 0:00:59.210 ****** 2025-10-09 10:45:49.659953 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:49.659963 | orchestrator | 2025-10-09 10:45:49.659972 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-10-09 10:45:49.659982 | 
orchestrator | Thursday 09 October 2025 10:44:34 +0000 (0:00:02.446) 0:01:01.657 ****** 2025-10-09 10:45:49.659991 | orchestrator | 2025-10-09 10:45:49.660001 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-10-09 10:45:49.660010 | orchestrator | Thursday 09 October 2025 10:44:34 +0000 (0:00:00.071) 0:01:01.729 ****** 2025-10-09 10:45:49.660020 | orchestrator | 2025-10-09 10:45:49.660029 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-10-09 10:45:49.660039 | orchestrator | Thursday 09 October 2025 10:44:34 +0000 (0:00:00.089) 0:01:01.819 ****** 2025-10-09 10:45:49.660048 | orchestrator | 2025-10-09 10:45:49.660071 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-10-09 10:45:49.660080 | orchestrator | Thursday 09 October 2025 10:44:34 +0000 (0:00:00.310) 0:01:02.130 ****** 2025-10-09 10:45:49.660090 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:49.660099 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:49.660109 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:49.660175 | orchestrator | 2025-10-09 10:45:49.660187 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-10-09 10:45:49.660197 | orchestrator | Thursday 09 October 2025 10:44:36 +0000 (0:00:02.008) 0:01:04.138 ****** 2025-10-09 10:45:49.660206 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:49.660216 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:49.660226 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-10-09 10:45:49.660236 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-10-09 10:45:49.660246 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
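The "Waiting for grafana to start on first node" handler above keeps retrying until Grafana answers (three retries are burned before the ok that follows). The exact check used by the handler is not visible in this log, so the following is only a minimal Python sketch of an equivalent readiness probe against Grafana's /api/health endpoint; the URL, timeout and retry interval are assumptions, not values from this job.

```python
# Minimal readiness-probe sketch (assumed values; the actual Ansible handler
# used in this job is not shown in the log). Polls Grafana's /api/health
# endpoint until it reports a healthy database or the retry budget runs out,
# roughly mirroring the 12-retry behaviour visible above.
import json
import time
import urllib.request

GRAFANA_HEALTH = "http://localhost:3000/api/health"  # assumed URL
RETRIES = 12        # matches the retry count seen in the log
DELAY_SECONDS = 5   # assumed delay between attempts


def wait_for_grafana(url: str = GRAFANA_HEALTH) -> bool:
    """Return True once Grafana reports a healthy database, False if retries run out."""
    for attempt in range(1, RETRIES + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                payload = json.loads(resp.read().decode())
                if payload.get("database") == "ok":
                    return True
        except (OSError, ValueError):
            pass  # connection refused or not ready yet
        print(f"FAILED - RETRYING ({RETRIES - attempt} retries left)")
        time.sleep(DELAY_SECONDS)
    return False


if __name__ == "__main__":
    print("grafana ready" if wait_for_grafana() else "grafana did not come up")
```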
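For context on the long "Copying over custom dashboards" loop earlier in this play: each item is keyed by the dashboard's relative path and carries an Ansible find/stat-style file record (path, mode, uid, gid, size, mtime, ...), which suggests the role loops over the result of the "Find custom grafana dashboards" task listed in the recap. The sketch below is only an illustration of how such a path-to-file-info mapping can be assembled; it is not the role's actual code, and the field names are approximations of what the log prints.

```python
# Illustration only: build a {relative_path: file_info} mapping similar in shape
# to the per-item dictionaries printed by the dashboard copy loop above.
# This is not the kolla-ansible role's implementation; it only shows where
# fields like path, mode, uid, size and mtime typically come from (stat data).
import os
import stat

DASHBOARD_ROOT = "/operations/grafana/dashboards"  # path taken from the log


def collect_dashboards(root: str = DASHBOARD_ROOT) -> dict:
    dashboards = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".json"):
                continue
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            dashboards[os.path.relpath(full, root)] = {
                "path": full,
                "mode": oct(stat.S_IMODE(st.st_mode)),
                "uid": st.st_uid,
                "gid": st.st_gid,
                "size": st.st_size,
                "mtime": st.st_mtime,
                "isreg": stat.S_ISREG(st.st_mode),
            }
    return dashboards


if __name__ == "__main__":
    for key, info in sorted(collect_dashboards().items()):
        print(key, info["size"], "bytes")
```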
2025-10-09 10:45:49.660255 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:49.660265 | orchestrator | 2025-10-09 10:45:49.660275 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-10-09 10:45:49.660284 | orchestrator | Thursday 09 October 2025 10:45:17 +0000 (0:00:40.345) 0:01:44.484 ****** 2025-10-09 10:45:49.660294 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:49.660304 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:49.660313 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:49.660323 | orchestrator | 2025-10-09 10:45:49.660332 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-10-09 10:45:49.660342 | orchestrator | Thursday 09 October 2025 10:45:41 +0000 (0:00:24.367) 0:02:08.851 ****** 2025-10-09 10:45:49.660352 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:45:49.660361 | orchestrator | 2025-10-09 10:45:49.660371 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-10-09 10:45:49.660381 | orchestrator | Thursday 09 October 2025 10:45:44 +0000 (0:00:02.486) 0:02:11.337 ****** 2025-10-09 10:45:49.660391 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:49.660400 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:49.660410 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:49.660419 | orchestrator | 2025-10-09 10:45:49.660429 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-10-09 10:45:49.660439 | orchestrator | Thursday 09 October 2025 10:45:44 +0000 (0:00:00.554) 0:02:11.892 ****** 2025-10-09 10:45:49.660450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-10-09 10:45:49.660469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-10-09 10:45:49.660479 | orchestrator | 2025-10-09 10:45:49.660489 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-10-09 10:45:49.660498 | orchestrator | Thursday 09 October 2025 10:45:47 +0000 (0:00:02.519) 0:02:14.412 ****** 2025-10-09 10:45:49.660508 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:49.660518 | orchestrator | 2025-10-09 10:45:49.660527 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:45:49.660538 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-09 10:45:49.660548 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-09 10:45:49.660558 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-09 10:45:49.660568 | orchestrator | 2025-10-09 10:45:49.660577 | orchestrator | 2025-10-09 10:45:49.660587 | orchestrator | TASKS RECAP 
******************************************************************** 2025-10-09 10:45:49.660597 | orchestrator | Thursday 09 October 2025 10:45:47 +0000 (0:00:00.248) 0:02:14.660 ****** 2025-10-09 10:45:49.660610 | orchestrator | =============================================================================== 2025-10-09 10:45:49.660626 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 40.35s 2025-10-09 10:45:49.660636 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 40.28s 2025-10-09 10:45:49.660646 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.37s 2025-10-09 10:45:49.660656 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.77s 2025-10-09 10:45:49.660665 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.52s 2025-10-09 10:45:49.660675 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.49s 2025-10-09 10:45:49.660684 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.45s 2025-10-09 10:45:49.660694 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.01s 2025-10-09 10:45:49.660703 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.50s 2025-10-09 10:45:49.660713 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.43s 2025-10-09 10:45:49.660722 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.39s 2025-10-09 10:45:49.660731 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.38s 2025-10-09 10:45:49.660741 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.34s 2025-10-09 10:45:49.660750 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.19s 2025-10-09 10:45:49.660760 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.87s 2025-10-09 10:45:49.660769 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.87s 2025-10-09 10:45:49.660779 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.86s 2025-10-09 10:45:49.660788 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.81s 2025-10-09 10:45:49.660798 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.76s 2025-10-09 10:45:49.660807 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2025-10-09 10:45:49.660827 | orchestrator | 2025-10-09 10:45:49 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:49.660837 | orchestrator | 2025-10-09 10:45:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:52.707912 | orchestrator | 2025-10-09 10:45:52 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:52.708009 | orchestrator | 2025-10-09 10:45:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:55.754358 | orchestrator | 2025-10-09 10:45:55 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:55.754462 | orchestrator | 2025-10-09 10:45:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 
10:45:58.801762 | orchestrator | 2025-10-09 10:45:58 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state STARTED 2025-10-09 10:45:58.801880 | orchestrator | 2025-10-09 10:45:58 | INFO  | Wait 1 second(s) until the next check
[... identical "is in state STARTED" / "Wait 1 second(s) until the next check" messages, repeated roughly every three seconds from 10:46:01 through 10:48:43, elided ...]
2025-10-09 10:48:46.376378 | orchestrator | 2025-10-09 10:48:46 | INFO  | Task 34b4cd23-e467-4660-9b94-5d1ef3bdb0b2 is in state SUCCESS 2025-10-09 10:48:46.377743 | orchestrator | 2025-10-09 10:48:46.377779 | orchestrator | 2025-10-09 10:48:46.377793 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:48:46.377806 | orchestrator | 2025-10-09 10:48:46.377817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:48:46.377855 | 
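The STARTED/SUCCESS messages above come from the deployment wrapper polling an asynchronous task until it reaches a terminal state. A minimal sketch of that poll loop; get_task_state() is a hypothetical placeholder, not the actual osism client call:

```python
# Poll-until-done loop in the shape of the log output above.
# get_task_state() is a stand-in for whatever backend reports the
# Celery-style task state in the real deployment.
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def get_task_state(task_id: str) -> str:
    raise NotImplementedError("replace with a real task-state lookup")

def wait_for_task(task_id: str, interval: float = 1.0) -> str:
    while True:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```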
orchestrator | Thursday 09 October 2025 10:43:49 +0000 (0:00:00.292) 0:00:00.292 ****** 2025-10-09 10:48:46.377867 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.377879 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:48:46.377968 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:48:46.378143 | orchestrator | 2025-10-09 10:48:46.378164 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:48:46.378205 | orchestrator | Thursday 09 October 2025 10:43:49 +0000 (0:00:00.333) 0:00:00.625 ****** 2025-10-09 10:48:46.378217 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-10-09 10:48:46.378228 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-10-09 10:48:46.378239 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-10-09 10:48:46.378250 | orchestrator | 2025-10-09 10:48:46.378261 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-10-09 10:48:46.378271 | orchestrator | 2025-10-09 10:48:46.378282 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:48:46.378293 | orchestrator | Thursday 09 October 2025 10:43:50 +0000 (0:00:00.591) 0:00:01.217 ****** 2025-10-09 10:48:46.378304 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:48:46.378315 | orchestrator | 2025-10-09 10:48:46.378326 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-10-09 10:48:46.378339 | orchestrator | Thursday 09 October 2025 10:43:51 +0000 (0:00:00.588) 0:00:01.805 ****** 2025-10-09 10:48:46.378351 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-10-09 10:48:46.378364 | orchestrator | 2025-10-09 10:48:46.378376 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-10-09 10:48:46.378388 | orchestrator | Thursday 09 October 2025 10:43:54 +0000 (0:00:03.472) 0:00:05.278 ****** 2025-10-09 10:48:46.378399 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-10-09 10:48:46.378884 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-10-09 10:48:46.378899 | orchestrator | 2025-10-09 10:48:46.378910 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-10-09 10:48:46.378921 | orchestrator | Thursday 09 October 2025 10:44:00 +0000 (0:00:06.249) 0:00:11.527 ****** 2025-10-09 10:48:46.378932 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:48:46.378944 | orchestrator | 2025-10-09 10:48:46.378955 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-10-09 10:48:46.378965 | orchestrator | Thursday 09 October 2025 10:44:04 +0000 (0:00:03.575) 0:00:15.103 ****** 2025-10-09 10:48:46.378991 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:48:46.379003 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-10-09 10:48:46.379014 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-10-09 10:48:46.379025 | orchestrator | 2025-10-09 10:48:46.379035 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-10-09 
10:48:46.379046 | orchestrator | Thursday 09 October 2025 10:44:13 +0000 (0:00:09.191) 0:00:24.294 ****** 2025-10-09 10:48:46.379058 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:48:46.379310 | orchestrator | 2025-10-09 10:48:46.379324 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-10-09 10:48:46.379335 | orchestrator | Thursday 09 October 2025 10:44:17 +0000 (0:00:03.854) 0:00:28.149 ****** 2025-10-09 10:48:46.379346 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-10-09 10:48:46.379357 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-10-09 10:48:46.379368 | orchestrator | 2025-10-09 10:48:46.379379 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-10-09 10:48:46.379389 | orchestrator | Thursday 09 October 2025 10:44:25 +0000 (0:00:08.105) 0:00:36.254 ****** 2025-10-09 10:48:46.379415 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-10-09 10:48:46.379425 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-10-09 10:48:46.379436 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-10-09 10:48:46.379447 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-10-09 10:48:46.379458 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-10-09 10:48:46.379468 | orchestrator | 2025-10-09 10:48:46.379479 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:48:46.379490 | orchestrator | Thursday 09 October 2025 10:44:43 +0000 (0:00:17.654) 0:00:53.909 ****** 2025-10-09 10:48:46.379501 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:48:46.379511 | orchestrator | 2025-10-09 10:48:46.379522 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-10-09 10:48:46.379533 | orchestrator | Thursday 09 October 2025 10:44:43 +0000 (0:00:00.630) 0:00:54.539 ****** 2025-10-09 10:48:46.379544 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.379555 | orchestrator | 2025-10-09 10:48:46.379565 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-10-09 10:48:46.379576 | orchestrator | Thursday 09 October 2025 10:44:49 +0000 (0:00:05.580) 0:01:00.119 ****** 2025-10-09 10:48:46.379587 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.379598 | orchestrator | 2025-10-09 10:48:46.379609 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-10-09 10:48:46.379655 | orchestrator | Thursday 09 October 2025 10:44:54 +0000 (0:00:05.293) 0:01:05.413 ****** 2025-10-09 10:48:46.379668 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.379679 | orchestrator | 2025-10-09 10:48:46.379690 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-10-09 10:48:46.379700 | orchestrator | Thursday 09 October 2025 10:44:58 +0000 (0:00:03.667) 0:01:09.081 ****** 2025-10-09 10:48:46.379711 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-10-09 10:48:46.379722 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-10-09 10:48:46.379733 | orchestrator | 2025-10-09 10:48:46.379743 | orchestrator 
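The service-ks-register tasks above register the octavia service, its internal and public endpoints on port 9876, the service user, and the required roles in Keystone. A rough openstacksdk sketch of the same registration; the cloud name and region are assumptions, and kolla-ansible drives this through Ansible modules rather than this code:

```python
# Sketch of the Keystone registration steps logged above.
# Cloud name and region_id are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="testbed")  # assumes a clouds.yaml entry named "testbed"

service = conn.identity.create_service(name="octavia", type="load-balancer")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9876"),
    ("public", "https://api.testbed.osism.xyz:9876"),
]:
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url, region_id="RegionOne"
    )

# The subsequent tasks create the "octavia" user in the "service" project and
# grant it the admin role plus the load-balancer_* roles shown in the log.
```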
| TASK [octavia : Add rules for security groups] ********************************* 2025-10-09 10:48:46.379754 | orchestrator | Thursday 09 October 2025 10:45:10 +0000 (0:00:12.398) 0:01:21.480 ****** 2025-10-09 10:48:46.379765 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-10-09 10:48:46.379776 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-10-09 10:48:46.379788 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-10-09 10:48:46.379799 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-10-09 10:48:46.379810 | orchestrator | 2025-10-09 10:48:46.379820 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-10-09 10:48:46.379831 | orchestrator | Thursday 09 October 2025 10:45:28 +0000 (0:00:18.100) 0:01:39.580 ****** 2025-10-09 10:48:46.379842 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.379852 | orchestrator | 2025-10-09 10:48:46.379863 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-10-09 10:48:46.379874 | orchestrator | Thursday 09 October 2025 10:45:33 +0000 (0:00:04.873) 0:01:44.453 ****** 2025-10-09 10:48:46.379884 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.379895 | orchestrator | 2025-10-09 10:48:46.379905 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-10-09 10:48:46.379916 | orchestrator | Thursday 09 October 2025 10:45:39 +0000 (0:00:05.690) 0:01:50.144 ****** 2025-10-09 10:48:46.379935 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:48:46.379946 | orchestrator | 2025-10-09 10:48:46.379956 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-10-09 10:48:46.379967 | orchestrator | Thursday 09 October 2025 10:45:39 +0000 (0:00:00.267) 0:01:50.411 ****** 2025-10-09 10:48:46.379978 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.379989 | orchestrator | 2025-10-09 10:48:46.379999 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:48:46.380010 | orchestrator | Thursday 09 October 2025 10:45:45 +0000 (0:00:05.910) 0:01:56.322 ****** 2025-10-09 10:48:46.380027 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:48:46.380038 | orchestrator | 2025-10-09 10:48:46.380049 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-10-09 10:48:46.380059 | orchestrator | Thursday 09 October 2025 10:45:46 +0000 (0:00:01.139) 0:01:57.461 ****** 2025-10-09 10:48:46.380070 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.380081 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.380109 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.380121 | orchestrator | 2025-10-09 10:48:46.380131 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-10-09 10:48:46.380142 | orchestrator | Thursday 09 October 2025 10:45:52 +0000 (0:00:05.342) 0:02:02.804 ****** 
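The preparation steps above create the two Octavia security groups, open ICMP, SSH (22), the amphora agent port (9443/tcp) and the health-manager heartbeat port (5555/udp), and build the load-balancer management network and subnet. A hedged openstacksdk sketch of the same resources; the group names and ports mirror the log, while the network name and CIDR are illustrative:

```python
# Octavia management-network preparation, roughly as logged above.
# Cloud name, network name, and CIDR are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="testbed")  # assumes a clouds.yaml entry named "testbed"

mgmt_sg = conn.network.create_security_group(name="lb-mgmt-sec-grp")
hm_sg = conn.network.create_security_group(name="lb-health-mgr-sec-grp")

rules = [
    (mgmt_sg, "icmp", None, None),   # allow ping to amphorae
    (mgmt_sg, "tcp", 22, 22),        # SSH into amphorae
    (mgmt_sg, "tcp", 9443, 9443),    # amphora agent API
    (hm_sg, "udp", 5555, 5555),      # health-manager heartbeats
]
for sg, proto, pmin, pmax in rules:
    kwargs = {"security_group_id": sg.id, "direction": "ingress",
              "ethertype": "IPv4", "protocol": proto}
    if pmin is not None:
        kwargs.update(port_range_min=pmin, port_range_max=pmax)
    conn.network.create_security_group_rule(**kwargs)

net = conn.network.create_network(name="lb-mgmt-net")
conn.network.create_subnet(
    network_id=net.id,
    name="lb-mgmt-subnet",
    ip_version=4,
    cidr="10.1.0.0/24",  # illustrative; the role derives this from its own variables
)
```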
2025-10-09 10:48:46.380153 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.380163 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.380174 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.380185 | orchestrator | 2025-10-09 10:48:46.380196 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-10-09 10:48:46.380206 | orchestrator | Thursday 09 October 2025 10:45:56 +0000 (0:00:04.399) 0:02:07.204 ****** 2025-10-09 10:48:46.380217 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.380228 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.380238 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.380249 | orchestrator | 2025-10-09 10:48:46.380260 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-10-09 10:48:46.380271 | orchestrator | Thursday 09 October 2025 10:45:57 +0000 (0:00:00.795) 0:02:08.000 ****** 2025-10-09 10:48:46.380281 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.380292 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:48:46.380303 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:48:46.380314 | orchestrator | 2025-10-09 10:48:46.380324 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-10-09 10:48:46.380335 | orchestrator | Thursday 09 October 2025 10:45:59 +0000 (0:00:02.198) 0:02:10.198 ****** 2025-10-09 10:48:46.380346 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.380357 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.380367 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.380378 | orchestrator | 2025-10-09 10:48:46.380389 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-10-09 10:48:46.380400 | orchestrator | Thursday 09 October 2025 10:46:00 +0000 (0:00:01.265) 0:02:11.464 ****** 2025-10-09 10:48:46.380410 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.380421 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.380432 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.380442 | orchestrator | 2025-10-09 10:48:46.380453 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-10-09 10:48:46.380464 | orchestrator | Thursday 09 October 2025 10:46:02 +0000 (0:00:01.287) 0:02:12.751 ****** 2025-10-09 10:48:46.380475 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.380486 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.380496 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.380507 | orchestrator | 2025-10-09 10:48:46.380545 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-10-09 10:48:46.380558 | orchestrator | Thursday 09 October 2025 10:46:04 +0000 (0:00:02.161) 0:02:14.913 ****** 2025-10-09 10:48:46.380577 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.380588 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.380599 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.380609 | orchestrator | 2025-10-09 10:48:46.380620 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-10-09 10:48:46.380631 | orchestrator | Thursday 09 October 2025 10:46:05 +0000 (0:00:01.592) 0:02:16.505 ****** 2025-10-09 10:48:46.380642 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.380653 | 
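The last task above waits until the ohm0 health-manager interface has picked up an address from dhclient. A small Python sketch of that wait, assuming the psutil package is available (the role itself uses a plain Ansible check rather than this code):

```python
# Wait for an IPv4 address to appear on the ohm0 interface.
# Timeout and interval are illustrative assumptions.
import socket
import time

import psutil

def wait_for_interface_ip(ifname="ohm0", timeout=120, interval=2):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        addrs = psutil.net_if_addrs().get(ifname, [])
        ipv4 = [a.address for a in addrs if a.family == socket.AF_INET]
        if ipv4:
            return ipv4[0]
        time.sleep(interval)
    raise TimeoutError(f"no IPv4 address appeared on {ifname} within {timeout}s")
```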
orchestrator | ok: [testbed-node-1] 2025-10-09 10:48:46.380663 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:48:46.380674 | orchestrator | 2025-10-09 10:48:46.380685 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-10-09 10:48:46.380696 | orchestrator | Thursday 09 October 2025 10:46:06 +0000 (0:00:00.911) 0:02:17.417 ****** 2025-10-09 10:48:46.380706 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:48:46.380717 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:48:46.380728 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.380738 | orchestrator | 2025-10-09 10:48:46.380749 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:48:46.380760 | orchestrator | Thursday 09 October 2025 10:46:09 +0000 (0:00:03.014) 0:02:20.431 ****** 2025-10-09 10:48:46.380771 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:48:46.380781 | orchestrator | 2025-10-09 10:48:46.380792 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-10-09 10:48:46.380803 | orchestrator | Thursday 09 October 2025 10:46:10 +0000 (0:00:00.585) 0:02:21.016 ****** 2025-10-09 10:48:46.380814 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.380824 | orchestrator | 2025-10-09 10:48:46.380835 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-10-09 10:48:46.380846 | orchestrator | Thursday 09 October 2025 10:46:14 +0000 (0:00:04.147) 0:02:25.164 ****** 2025-10-09 10:48:46.380856 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.380867 | orchestrator | 2025-10-09 10:48:46.380878 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-10-09 10:48:46.380889 | orchestrator | Thursday 09 October 2025 10:46:17 +0000 (0:00:03.501) 0:02:28.666 ****** 2025-10-09 10:48:46.380900 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-10-09 10:48:46.380911 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-10-09 10:48:46.380922 | orchestrator | 2025-10-09 10:48:46.380933 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-10-09 10:48:46.380944 | orchestrator | Thursday 09 October 2025 10:46:25 +0000 (0:00:07.235) 0:02:35.901 ****** 2025-10-09 10:48:46.380954 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.380965 | orchestrator | 2025-10-09 10:48:46.380976 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-10-09 10:48:46.380992 | orchestrator | Thursday 09 October 2025 10:46:28 +0000 (0:00:03.782) 0:02:39.684 ****** 2025-10-09 10:48:46.381003 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:48:46.381013 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:48:46.381024 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:48:46.381035 | orchestrator | 2025-10-09 10:48:46.381046 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-10-09 10:48:46.381056 | orchestrator | Thursday 09 October 2025 10:46:29 +0000 (0:00:00.334) 0:02:40.018 ****** 2025-10-09 10:48:46.381070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.381177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.381195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.381207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.381227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.381239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.381252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.381272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.381309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.381322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': 
'30'}}}) 2025-10-09 10:48:46.381335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.381351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.381363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.381382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.381417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.381429 | orchestrator | 2025-10-09 10:48:46.381441 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-10-09 10:48:46.381452 | orchestrator | Thursday 09 October 2025 10:46:31 +0000 (0:00:02.650) 0:02:42.669 ****** 2025-10-09 10:48:46.381463 | orchestrator | 
skipping: [testbed-node-0] 2025-10-09 10:48:46.381474 | orchestrator | 2025-10-09 10:48:46.381485 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-10-09 10:48:46.381496 | orchestrator | Thursday 09 October 2025 10:46:32 +0000 (0:00:00.138) 0:02:42.807 ****** 2025-10-09 10:48:46.381507 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:48:46.381518 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:48:46.381529 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:48:46.381539 | orchestrator | 2025-10-09 10:48:46.381550 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-10-09 10:48:46.381559 | orchestrator | Thursday 09 October 2025 10:46:32 +0000 (0:00:00.526) 0:02:43.334 ****** 2025-10-09 10:48:46.381569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.381585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.381602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.381613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.381623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.381633 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:48:46.381667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.381678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.381698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.381714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.381725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.381735 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:48:46.381768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.381779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.381789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.381799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.381820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.381831 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:48:46.381840 | orchestrator | 2025-10-09 10:48:46.381850 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:48:46.381860 | orchestrator | Thursday 09 October 2025 10:46:33 +0000 (0:00:00.758) 0:02:44.093 ****** 2025-10-09 10:48:46.381870 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:48:46.381879 | orchestrator | 2025-10-09 10:48:46.381889 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-10-09 10:48:46.381898 | orchestrator | Thursday 09 October 2025 10:46:33 +0000 (0:00:00.583) 0:02:44.676 ****** 2025-10-09 10:48:46.381908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.381941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.381953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.381975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.381986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.381996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.382006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382183 | orchestrator | 2025-10-09 10:48:46.382193 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-10-09 10:48:46.382212 | orchestrator | Thursday 09 October 2025 10:46:39 +0000 (0:00:05.311) 0:02:49.988 ****** 2025-10-09 10:48:46.382222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.382237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.382247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.382284 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:48:46.382294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.382310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.382324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.382355 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:48:46.382372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.382390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.382400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.382434 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:48:46.382444 | orchestrator | 2025-10-09 10:48:46.382454 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-10-09 10:48:46.382464 | orchestrator | Thursday 09 October 2025 10:46:40 +0000 (0:00:00.957) 0:02:50.945 ****** 2025-10-09 10:48:46.382474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.382489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.382506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.382544 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:48:46.382554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.382564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.382580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.382617 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:48:46.382631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:48:46.382642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:48:46.382652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:48:46.382731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:48:46.382742 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:48:46.382752 | orchestrator | 2025-10-09 10:48:46.382761 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-10-09 10:48:46.382771 | orchestrator | Thursday 09 October 2025 10:46:41 +0000 (0:00:00.996) 0:02:51.942 ****** 2025-10-09 10:48:46.382781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2025-10-09 10:48:46.382796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.382806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.382828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:48:46.382851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.382861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.382875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.382982 | orchestrator | 2025-10-09 10:48:46.382993 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-10-09 10:48:46.383002 | orchestrator | Thursday 09 October 2025 10:46:46 +0000 (0:00:05.194) 0:02:57.137 ****** 2025-10-09 10:48:46.383020 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-09 10:48:46.383030 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-09 10:48:46.383039 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-09 10:48:46.383049 | orchestrator | 2025-10-09 10:48:46.383058 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-10-09 10:48:46.383068 | orchestrator | Thursday 09 October 2025 10:46:48 +0000 (0:00:01.749) 
0:02:58.886 ****** 2025-10-09 10:48:46.383084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.383109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.383125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.383135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.383152 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.383162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.383178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383286 | orchestrator | 2025-10-09 10:48:46.383296 | orchestrator | TASK [octavia : Copying over Octavia SSH 
key] ********************************** 2025-10-09 10:48:46.383306 | orchestrator | Thursday 09 October 2025 10:47:05 +0000 (0:00:16.979) 0:03:15.866 ****** 2025-10-09 10:48:46.383316 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.383325 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.383335 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.383345 | orchestrator | 2025-10-09 10:48:46.383354 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-10-09 10:48:46.383364 | orchestrator | Thursday 09 October 2025 10:47:06 +0000 (0:00:01.582) 0:03:17.448 ****** 2025-10-09 10:48:46.383374 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383383 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383393 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383402 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383412 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383426 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383445 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383455 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383464 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383474 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383483 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383493 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383503 | orchestrator | 2025-10-09 10:48:46.383512 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-10-09 10:48:46.383522 | orchestrator | Thursday 09 October 2025 10:47:12 +0000 (0:00:05.401) 0:03:22.850 ****** 2025-10-09 10:48:46.383532 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383541 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383551 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383560 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383570 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383579 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383589 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383599 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383608 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383618 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383628 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383637 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383647 | orchestrator | 2025-10-09 10:48:46.383657 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-10-09 10:48:46.383666 | orchestrator | 
Thursday 09 October 2025 10:47:17 +0000 (0:00:05.568) 0:03:28.419 ****** 2025-10-09 10:48:46.383676 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383685 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383695 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-09 10:48:46.383704 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383714 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383724 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-09 10:48:46.383738 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383748 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383758 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-09 10:48:46.383767 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383777 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383787 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-09 10:48:46.383796 | orchestrator | 2025-10-09 10:48:46.383806 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-10-09 10:48:46.383815 | orchestrator | Thursday 09 October 2025 10:47:22 +0000 (0:00:05.225) 0:03:33.644 ****** 2025-10-09 10:48:46.383825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.383846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.383857 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:48:46.383867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.383884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.383895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:48:46.383911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.383998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.384013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:48:46.384023 | orchestrator | 2025-10-09 10:48:46.384033 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:48:46.384043 | orchestrator | Thursday 09 October 2025 10:47:26 +0000 (0:00:03.870) 0:03:37.515 ****** 2025-10-09 10:48:46.384052 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:48:46.384062 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:48:46.384072 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:48:46.384082 | orchestrator | 2025-10-09 10:48:46.384139 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-10-09 10:48:46.384151 | orchestrator | Thursday 09 October 2025 10:47:27 +0000 (0:00:00.326) 0:03:37.841 ****** 2025-10-09 10:48:46.384161 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384170 | orchestrator | 2025-10-09 10:48:46.384180 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-10-09 10:48:46.384190 | orchestrator | Thursday 09 October 2025 10:47:29 +0000 (0:00:02.230) 0:03:40.072 ****** 2025-10-09 10:48:46.384199 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384209 | orchestrator | 2025-10-09 10:48:46.384219 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-10-09 10:48:46.384228 | orchestrator | Thursday 09 October 2025 10:47:31 +0000 (0:00:02.199) 0:03:42.271 ****** 2025-10-09 10:48:46.384241 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384257 | orchestrator | 2025-10-09 10:48:46.384273 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-10-09 10:48:46.384290 | orchestrator | Thursday 09 October 2025 10:47:33 +0000 (0:00:02.331) 0:03:44.603 ****** 2025-10-09 10:48:46.384305 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384321 | orchestrator | 2025-10-09 10:48:46.384329 | orchestrator | TASK [octavia : Running Octavia bootstrap container] 
*************************** 2025-10-09 10:48:46.384337 | orchestrator | Thursday 09 October 2025 10:47:36 +0000 (0:00:02.436) 0:03:47.040 ****** 2025-10-09 10:48:46.384344 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384352 | orchestrator | 2025-10-09 10:48:46.384360 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-10-09 10:48:46.384368 | orchestrator | Thursday 09 October 2025 10:47:59 +0000 (0:00:22.883) 0:04:09.923 ****** 2025-10-09 10:48:46.384381 | orchestrator | 2025-10-09 10:48:46.384389 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-10-09 10:48:46.384397 | orchestrator | Thursday 09 October 2025 10:47:59 +0000 (0:00:00.073) 0:04:09.996 ****** 2025-10-09 10:48:46.384405 | orchestrator | 2025-10-09 10:48:46.384413 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-10-09 10:48:46.384425 | orchestrator | Thursday 09 October 2025 10:47:59 +0000 (0:00:00.081) 0:04:10.078 ****** 2025-10-09 10:48:46.384433 | orchestrator | 2025-10-09 10:48:46.384441 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-10-09 10:48:46.384449 | orchestrator | Thursday 09 October 2025 10:47:59 +0000 (0:00:00.069) 0:04:10.148 ****** 2025-10-09 10:48:46.384457 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384464 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.384472 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.384480 | orchestrator | 2025-10-09 10:48:46.384488 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-10-09 10:48:46.384495 | orchestrator | Thursday 09 October 2025 10:48:11 +0000 (0:00:11.691) 0:04:21.840 ****** 2025-10-09 10:48:46.384503 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384511 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.384519 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.384526 | orchestrator | 2025-10-09 10:48:46.384534 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-10-09 10:48:46.384542 | orchestrator | Thursday 09 October 2025 10:48:18 +0000 (0:00:07.430) 0:04:29.270 ****** 2025-10-09 10:48:46.384550 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384558 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.384566 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.384573 | orchestrator | 2025-10-09 10:48:46.384581 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-10-09 10:48:46.384589 | orchestrator | Thursday 09 October 2025 10:48:24 +0000 (0:00:05.924) 0:04:35.195 ****** 2025-10-09 10:48:46.384596 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384604 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.384612 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:48:46.384620 | orchestrator | 2025-10-09 10:48:46.384628 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-10-09 10:48:46.384635 | orchestrator | Thursday 09 October 2025 10:48:35 +0000 (0:00:10.865) 0:04:46.061 ****** 2025-10-09 10:48:46.384643 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:48:46.384651 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:48:46.384658 | orchestrator | changed: [testbed-node-2] 
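The loop items echoed by the "Check octavia containers" task above all share the same shape: a kolla image reference, bind-mounted config directories, and a Docker-style healthcheck. A minimal sketch of one such entry, reconstructed from the log output (illustrative only; the authoritative structure lives in the kolla-ansible octavia role and may differ in detail):

```yaml
# Reconstructed from the "Check octavia containers" items above; illustrative
# only, not the role's actual defaults file.
octavia-worker:
  container_name: octavia_worker
  group: octavia-worker
  enabled: true
  image: registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928
  volumes:
    - /etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_port octavia-worker 5672"]
    timeout: "30"
```

Per the items above, the worker healthcheck probes the messaging port (5672), housekeeping and health-manager probe 3306, and the API container uses healthcheck_curl against its bound address behind the octavia_api HAProxy frontends; this is why the octavia containers later appear as (healthy) in the container listings of the check step.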
2025-10-09 10:48:46.384666 | orchestrator | 2025-10-09 10:48:46.384674 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:48:46.384682 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-09 10:48:46.384690 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:48:46.384698 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:48:46.384706 | orchestrator | 2025-10-09 10:48:46.384713 | orchestrator | 2025-10-09 10:48:46.384721 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:48:46.384733 | orchestrator | Thursday 09 October 2025 10:48:45 +0000 (0:00:10.435) 0:04:56.496 ****** 2025-10-09 10:48:46.384741 | orchestrator | =============================================================================== 2025-10-09 10:48:46.384749 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.88s 2025-10-09 10:48:46.384757 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.10s 2025-10-09 10:48:46.384764 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.65s 2025-10-09 10:48:46.384777 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.98s 2025-10-09 10:48:46.384785 | orchestrator | octavia : Create security groups for octavia --------------------------- 12.40s 2025-10-09 10:48:46.384792 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.69s 2025-10-09 10:48:46.384800 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.87s 2025-10-09 10:48:46.384808 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.44s 2025-10-09 10:48:46.384815 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.19s 2025-10-09 10:48:46.384823 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.11s 2025-10-09 10:48:46.384831 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.43s 2025-10-09 10:48:46.384839 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.24s 2025-10-09 10:48:46.384846 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.25s 2025-10-09 10:48:46.384854 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.92s 2025-10-09 10:48:46.384862 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.91s 2025-10-09 10:48:46.384869 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.69s 2025-10-09 10:48:46.384877 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.58s 2025-10-09 10:48:46.384885 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.57s 2025-10-09 10:48:46.384892 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.40s 2025-10-09 10:48:46.384900 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.34s 2025-10-09 10:48:49.416389 | orchestrator | 
2025-10-09 10:48:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:48:52.465908 | orchestrator | 2025-10-09 10:48:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:48:55.507528 | orchestrator | 2025-10-09 10:48:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:48:58.550819 | orchestrator | 2025-10-09 10:48:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:01.597159 | orchestrator | 2025-10-09 10:49:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:04.642779 | orchestrator | 2025-10-09 10:49:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:07.682537 | orchestrator | 2025-10-09 10:49:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:10.730706 | orchestrator | 2025-10-09 10:49:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:13.777860 | orchestrator | 2025-10-09 10:49:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:16.829793 | orchestrator | 2025-10-09 10:49:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:19.870074 | orchestrator | 2025-10-09 10:49:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:22.917756 | orchestrator | 2025-10-09 10:49:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:25.958893 | orchestrator | 2025-10-09 10:49:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:29.004539 | orchestrator | 2025-10-09 10:49:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:32.051525 | orchestrator | 2025-10-09 10:49:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:35.083623 | orchestrator | 2025-10-09 10:49:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:38.125138 | orchestrator | 2025-10-09 10:49:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:41.164150 | orchestrator | 2025-10-09 10:49:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:44.208031 | orchestrator | 2025-10-09 10:49:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:49:47.258282 | orchestrator | 2025-10-09 10:49:47.613404 | orchestrator | 2025-10-09 10:49:47.622326 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Oct 9 10:49:47 UTC 2025 2025-10-09 10:49:47.622358 | orchestrator | 2025-10-09 10:49:47.926277 | orchestrator | ok: Runtime: 0:36:11.698561 2025-10-09 10:49:48.170602 | 2025-10-09 10:49:48.170748 | TASK [Bootstrap services] 2025-10-09 10:49:48.929837 | orchestrator | 2025-10-09 10:49:48.929963 | orchestrator | # BOOTSTRAP 2025-10-09 10:49:48.929975 | orchestrator | 2025-10-09 10:49:48.929982 | orchestrator | + set -e 2025-10-09 10:49:48.929989 | orchestrator | + echo 2025-10-09 10:49:48.929996 | orchestrator | + echo '# BOOTSTRAP' 2025-10-09 10:49:48.930006 | orchestrator | + echo 2025-10-09 10:49:48.930052 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-10-09 10:49:48.939682 | orchestrator | + set -e 2025-10-09 10:49:48.939768 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-10-09 10:49:54.321590 | orchestrator | 2025-10-09 10:49:54 | INFO  | It takes a moment until task 59df10f3-da12-4b67-b869-370b3fd95257 (flavor-manager) has been started and output is visible here. 
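The flavor names created below follow the SCS flavor naming convention: SCS-&lt;n&gt;V-&lt;ram&gt;[-&lt;disk&gt;] stands for n vCPUs, &lt;ram&gt; GiB RAM and, when present, a &lt;disk&gt; GB root disk; the L class (as in SCS-1L-1) marks low-performance, more heavily oversubscribed vCPUs, and a trailing s (as in SCS-2V-4-20s) marks SSD-backed storage. A hedged sketch of how two of those names map onto flavor properties (key names are illustrative, not necessarily those consumed by the flavor manager):

```yaml
# Sketch only: mapping two SCS flavor names from the output below onto
# properties, assuming the SCS naming convention; key names are illustrative.
- name: SCS-1L-1       # 1 low-performance vCPU, 1 GiB RAM, no root disk
  vcpus: 1
  ram: 1024            # MiB
  disk: 0
- name: SCS-2V-4-10    # 2 vCPUs, 4 GiB RAM, 10 GB root disk
  vcpus: 2
  ram: 4096            # MiB
  disk: 10
```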
2025-10-09 10:50:02.649780 | orchestrator | 2025-10-09 10:49:57 | INFO  | Flavor SCS-1L-1 created 2025-10-09 10:50:02.649879 | orchestrator | 2025-10-09 10:49:57 | INFO  | Flavor SCS-1L-1-5 created 2025-10-09 10:50:02.649890 | orchestrator | 2025-10-09 10:49:58 | INFO  | Flavor SCS-1V-2 created 2025-10-09 10:50:02.649898 | orchestrator | 2025-10-09 10:49:58 | INFO  | Flavor SCS-1V-2-5 created 2025-10-09 10:50:02.649904 | orchestrator | 2025-10-09 10:49:58 | INFO  | Flavor SCS-1V-4 created 2025-10-09 10:50:02.649911 | orchestrator | 2025-10-09 10:49:58 | INFO  | Flavor SCS-1V-4-10 created 2025-10-09 10:50:02.649918 | orchestrator | 2025-10-09 10:49:58 | INFO  | Flavor SCS-1V-8 created 2025-10-09 10:50:02.649925 | orchestrator | 2025-10-09 10:49:59 | INFO  | Flavor SCS-1V-8-20 created 2025-10-09 10:50:02.649943 | orchestrator | 2025-10-09 10:49:59 | INFO  | Flavor SCS-2V-4 created 2025-10-09 10:50:02.649950 | orchestrator | 2025-10-09 10:49:59 | INFO  | Flavor SCS-2V-4-10 created 2025-10-09 10:50:02.649956 | orchestrator | 2025-10-09 10:49:59 | INFO  | Flavor SCS-2V-8 created 2025-10-09 10:50:02.649963 | orchestrator | 2025-10-09 10:49:59 | INFO  | Flavor SCS-2V-8-20 created 2025-10-09 10:50:02.649969 | orchestrator | 2025-10-09 10:50:00 | INFO  | Flavor SCS-2V-16 created 2025-10-09 10:50:02.649975 | orchestrator | 2025-10-09 10:50:00 | INFO  | Flavor SCS-2V-16-50 created 2025-10-09 10:50:02.649981 | orchestrator | 2025-10-09 10:50:00 | INFO  | Flavor SCS-4V-8 created 2025-10-09 10:50:02.649987 | orchestrator | 2025-10-09 10:50:00 | INFO  | Flavor SCS-4V-8-20 created 2025-10-09 10:50:02.649994 | orchestrator | 2025-10-09 10:50:00 | INFO  | Flavor SCS-4V-16 created 2025-10-09 10:50:02.650000 | orchestrator | 2025-10-09 10:50:00 | INFO  | Flavor SCS-4V-16-50 created 2025-10-09 10:50:02.650006 | orchestrator | 2025-10-09 10:50:00 | INFO  | Flavor SCS-4V-32 created 2025-10-09 10:50:02.650042 | orchestrator | 2025-10-09 10:50:00 | INFO  | Flavor SCS-4V-32-100 created 2025-10-09 10:50:02.650050 | orchestrator | 2025-10-09 10:50:01 | INFO  | Flavor SCS-8V-16 created 2025-10-09 10:50:02.650056 | orchestrator | 2025-10-09 10:50:01 | INFO  | Flavor SCS-8V-16-50 created 2025-10-09 10:50:02.650063 | orchestrator | 2025-10-09 10:50:01 | INFO  | Flavor SCS-8V-32 created 2025-10-09 10:50:02.650069 | orchestrator | 2025-10-09 10:50:01 | INFO  | Flavor SCS-8V-32-100 created 2025-10-09 10:50:02.650075 | orchestrator | 2025-10-09 10:50:01 | INFO  | Flavor SCS-16V-32 created 2025-10-09 10:50:02.650100 | orchestrator | 2025-10-09 10:50:01 | INFO  | Flavor SCS-16V-32-100 created 2025-10-09 10:50:02.650107 | orchestrator | 2025-10-09 10:50:02 | INFO  | Flavor SCS-2V-4-20s created 2025-10-09 10:50:02.650113 | orchestrator | 2025-10-09 10:50:02 | INFO  | Flavor SCS-4V-8-50s created 2025-10-09 10:50:02.650119 | orchestrator | 2025-10-09 10:50:02 | INFO  | Flavor SCS-8V-32-100s created 2025-10-09 10:50:05.154208 | orchestrator | 2025-10-09 10:50:05 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-10-09 10:50:15.262571 | orchestrator | 2025-10-09 10:50:15 | INFO  | Task d0fe82fd-d582-4fb0-9b5f-39eb84d2b0f5 (bootstrap-basic) was prepared for execution. 2025-10-09 10:50:15.262675 | orchestrator | 2025-10-09 10:50:15 | INFO  | It takes a moment until task d0fe82fd-d582-4fb0-9b5f-39eb84d2b0f5 (bootstrap-basic) has been started and output is visible here. 
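The bootstrap-basic play whose output follows creates the cloud-wide defaults a fresh deployment needs: LUKS and local volume types, a public external network with subnet and default IPv4 subnet pool, and a manager role. A minimal sketch of tasks in that spirit, assuming the openstack.cloud collection (the actual play may use different modules, names and parameters):

```yaml
# Minimal sketch, assuming the openstack.cloud collection; the real
# bootstrap-basic play may differ in modules, names and parameters.
- name: Bootstrap basic OpenStack services (sketch)
  hosts: localhost
  tasks:
    - name: Create public network
      openstack.cloud.network:
        cloud: admin
        name: public
        external: true

    - name: Create public subnet
      openstack.cloud.subnet:
        cloud: admin
        network_name: public
        name: public-subnet
        cidr: 192.0.2.0/24     # placeholder CIDR, not taken from this deployment

    - name: Create manager role
      openstack.cloud.identity_role:
        cloud: admin
        name: manager
```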
2025-10-09 10:51:19.690649 | orchestrator | 2025-10-09 10:51:19.690779 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-10-09 10:51:19.690813 | orchestrator | 2025-10-09 10:51:19.690835 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 10:51:19.690853 | orchestrator | Thursday 09 October 2025 10:50:19 +0000 (0:00:00.078) 0:00:00.078 ****** 2025-10-09 10:51:19.690873 | orchestrator | ok: [localhost] 2025-10-09 10:51:19.690892 | orchestrator | 2025-10-09 10:51:19.690909 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-10-09 10:51:19.690928 | orchestrator | Thursday 09 October 2025 10:50:21 +0000 (0:00:02.007) 0:00:02.086 ****** 2025-10-09 10:51:19.690946 | orchestrator | ok: [localhost] 2025-10-09 10:51:19.690964 | orchestrator | 2025-10-09 10:51:19.690984 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-10-09 10:51:19.691002 | orchestrator | Thursday 09 October 2025 10:50:32 +0000 (0:00:10.358) 0:00:12.444 ****** 2025-10-09 10:51:19.691020 | orchestrator | changed: [localhost] 2025-10-09 10:51:19.691038 | orchestrator | 2025-10-09 10:51:19.691057 | orchestrator | TASK [Get volume type local] *************************************************** 2025-10-09 10:51:19.691118 | orchestrator | Thursday 09 October 2025 10:50:40 +0000 (0:00:08.074) 0:00:20.518 ****** 2025-10-09 10:51:19.691137 | orchestrator | ok: [localhost] 2025-10-09 10:51:19.691156 | orchestrator | 2025-10-09 10:51:19.691172 | orchestrator | TASK [Create volume type local] ************************************************ 2025-10-09 10:51:19.691184 | orchestrator | Thursday 09 October 2025 10:50:47 +0000 (0:00:07.328) 0:00:27.847 ****** 2025-10-09 10:51:19.691200 | orchestrator | changed: [localhost] 2025-10-09 10:51:19.691211 | orchestrator | 2025-10-09 10:51:19.691223 | orchestrator | TASK [Create public network] *************************************************** 2025-10-09 10:51:19.691234 | orchestrator | Thursday 09 October 2025 10:50:54 +0000 (0:00:07.096) 0:00:34.943 ****** 2025-10-09 10:51:19.691245 | orchestrator | changed: [localhost] 2025-10-09 10:51:19.691256 | orchestrator | 2025-10-09 10:51:19.691267 | orchestrator | TASK [Set public network to default] ******************************************* 2025-10-09 10:51:19.691278 | orchestrator | Thursday 09 October 2025 10:50:59 +0000 (0:00:05.390) 0:00:40.334 ****** 2025-10-09 10:51:19.691289 | orchestrator | changed: [localhost] 2025-10-09 10:51:19.691299 | orchestrator | 2025-10-09 10:51:19.691310 | orchestrator | TASK [Create public subnet] **************************************************** 2025-10-09 10:51:19.691333 | orchestrator | Thursday 09 October 2025 10:51:06 +0000 (0:00:06.815) 0:00:47.149 ****** 2025-10-09 10:51:19.691344 | orchestrator | changed: [localhost] 2025-10-09 10:51:19.691355 | orchestrator | 2025-10-09 10:51:19.691366 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-10-09 10:51:19.691377 | orchestrator | Thursday 09 October 2025 10:51:11 +0000 (0:00:04.782) 0:00:51.931 ****** 2025-10-09 10:51:19.691387 | orchestrator | changed: [localhost] 2025-10-09 10:51:19.691398 | orchestrator | 2025-10-09 10:51:19.691409 | orchestrator | TASK [Create manager role] ***************************************************** 2025-10-09 10:51:19.691420 | orchestrator | Thursday 09 October 2025 
10:51:15 +0000 (0:00:04.021) 0:00:55.952 ****** 2025-10-09 10:51:19.691431 | orchestrator | ok: [localhost] 2025-10-09 10:51:19.691442 | orchestrator | 2025-10-09 10:51:19.691453 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:51:19.691464 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:51:19.691476 | orchestrator | 2025-10-09 10:51:19.691487 | orchestrator | 2025-10-09 10:51:19.691498 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:51:19.691532 | orchestrator | Thursday 09 October 2025 10:51:19 +0000 (0:00:03.877) 0:00:59.830 ****** 2025-10-09 10:51:19.691543 | orchestrator | =============================================================================== 2025-10-09 10:51:19.691554 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.36s 2025-10-09 10:51:19.691565 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.07s 2025-10-09 10:51:19.691576 | orchestrator | Get volume type local --------------------------------------------------- 7.33s 2025-10-09 10:51:19.691586 | orchestrator | Create volume type local ------------------------------------------------ 7.10s 2025-10-09 10:51:19.691597 | orchestrator | Set public network to default ------------------------------------------- 6.82s 2025-10-09 10:51:19.691608 | orchestrator | Create public network --------------------------------------------------- 5.39s 2025-10-09 10:51:19.691619 | orchestrator | Create public subnet ---------------------------------------------------- 4.78s 2025-10-09 10:51:19.691629 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.02s 2025-10-09 10:51:19.691640 | orchestrator | Create manager role ----------------------------------------------------- 3.88s 2025-10-09 10:51:19.691651 | orchestrator | Gathering Facts --------------------------------------------------------- 2.01s 2025-10-09 10:51:22.224180 | orchestrator | 2025-10-09 10:51:22 | INFO  | It takes a moment until task fdc455b2-a928-411f-b2a6-ef7bff4bab58 (image-manager) has been started and output is visible here. 2025-10-09 10:52:04.921218 | orchestrator | 2025-10-09 10:51:25 | INFO  | Processing image 'Cirros 0.6.2' 2025-10-09 10:52:04.921385 | orchestrator | 2025-10-09 10:51:25 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-10-09 10:52:04.921408 | orchestrator | 2025-10-09 10:51:25 | INFO  | Importing image Cirros 0.6.2 2025-10-09 10:52:04.921421 | orchestrator | 2025-10-09 10:51:25 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-10-09 10:52:04.921446 | orchestrator | 2025-10-09 10:51:27 | INFO  | Waiting for image to leave queued state... 2025-10-09 10:52:04.921460 | orchestrator | 2025-10-09 10:51:29 | INFO  | Waiting for import to complete... 
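The import loop above and the long run of "Setting property …" lines that follow are both driven by a declarative image definition: the image manager uploads the file, waits for it to become active, then reconciles tags, properties and visibility against the definition. A hedged sketch of what such a definition for the Cirros image could look like (key names are approximate; the values mirror the properties set in the surrounding log output):

```yaml
# Hedged sketch of an image definition; key names are approximate, values
# mirror the properties being set in the log output around this point.
- name: Cirros
  login: cirros
  visibility: public
  multi: true                 # keeps versioned images such as "Cirros 0.6.2" and "Cirros 0.6.3"
  meta:
    architecture: x86_64
    hw_disk_bus: scsi
    hw_rng_model: virtio
    hw_scsi_model: virtio-scsi
    hw_watchdog_action: reset
    hypervisor_type: qemu
    os_distro: cirros
    os_purpose: minimal
    replace_frequency: never
  tags:
    - os:cirros
  versions:
    - version: '0.6.2'
      url: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
      build_date: 2023-05-30
```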
2025-10-09 10:52:04.921472 | orchestrator | 2025-10-09 10:51:40 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-10-09 10:52:04.921483 | orchestrator | 2025-10-09 10:51:40 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-10-09 10:52:04.921495 | orchestrator | 2025-10-09 10:51:40 | INFO  | Setting internal_version = 0.6.2 2025-10-09 10:52:04.921506 | orchestrator | 2025-10-09 10:51:40 | INFO  | Setting image_original_user = cirros 2025-10-09 10:52:04.921517 | orchestrator | 2025-10-09 10:51:40 | INFO  | Adding tag os:cirros 2025-10-09 10:52:04.921529 | orchestrator | 2025-10-09 10:51:40 | INFO  | Setting property architecture: x86_64 2025-10-09 10:52:04.921539 | orchestrator | 2025-10-09 10:51:40 | INFO  | Setting property hw_disk_bus: scsi 2025-10-09 10:52:04.921550 | orchestrator | 2025-10-09 10:51:41 | INFO  | Setting property hw_rng_model: virtio 2025-10-09 10:52:04.921561 | orchestrator | 2025-10-09 10:51:41 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-10-09 10:52:04.921572 | orchestrator | 2025-10-09 10:51:41 | INFO  | Setting property hw_watchdog_action: reset 2025-10-09 10:52:04.921583 | orchestrator | 2025-10-09 10:51:41 | INFO  | Setting property hypervisor_type: qemu 2025-10-09 10:52:04.921594 | orchestrator | 2025-10-09 10:51:41 | INFO  | Setting property os_distro: cirros 2025-10-09 10:52:04.921604 | orchestrator | 2025-10-09 10:51:42 | INFO  | Setting property os_purpose: minimal 2025-10-09 10:52:04.921615 | orchestrator | 2025-10-09 10:51:42 | INFO  | Setting property replace_frequency: never 2025-10-09 10:52:04.921653 | orchestrator | 2025-10-09 10:51:42 | INFO  | Setting property uuid_validity: none 2025-10-09 10:52:04.921667 | orchestrator | 2025-10-09 10:51:42 | INFO  | Setting property provided_until: none 2025-10-09 10:52:04.921691 | orchestrator | 2025-10-09 10:51:43 | INFO  | Setting property image_description: Cirros 2025-10-09 10:52:04.921710 | orchestrator | 2025-10-09 10:51:43 | INFO  | Setting property image_name: Cirros 2025-10-09 10:52:04.921722 | orchestrator | 2025-10-09 10:51:43 | INFO  | Setting property internal_version: 0.6.2 2025-10-09 10:52:04.921735 | orchestrator | 2025-10-09 10:51:43 | INFO  | Setting property image_original_user: cirros 2025-10-09 10:52:04.921747 | orchestrator | 2025-10-09 10:51:43 | INFO  | Setting property os_version: 0.6.2 2025-10-09 10:52:04.921761 | orchestrator | 2025-10-09 10:51:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-10-09 10:52:04.921776 | orchestrator | 2025-10-09 10:51:44 | INFO  | Setting property image_build_date: 2023-05-30 2025-10-09 10:52:04.921788 | orchestrator | 2025-10-09 10:51:44 | INFO  | Checking status of 'Cirros 0.6.2' 2025-10-09 10:52:04.921800 | orchestrator | 2025-10-09 10:51:44 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-10-09 10:52:04.921813 | orchestrator | 2025-10-09 10:51:44 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-10-09 10:52:04.921826 | orchestrator | 2025-10-09 10:51:45 | INFO  | Processing image 'Cirros 0.6.3' 2025-10-09 10:52:04.921839 | orchestrator | 2025-10-09 10:51:45 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-10-09 10:52:04.921852 | orchestrator | 2025-10-09 10:51:45 | INFO  | Importing image Cirros 0.6.3 2025-10-09 10:52:04.921864 | orchestrator | 2025-10-09 10:51:45 | INFO  | Importing from URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-10-09 10:52:04.921877 | orchestrator | 2025-10-09 10:51:46 | INFO  | Waiting for image to leave queued state... 2025-10-09 10:52:04.921889 | orchestrator | 2025-10-09 10:51:48 | INFO  | Waiting for import to complete... 2025-10-09 10:52:04.921923 | orchestrator | 2025-10-09 10:51:59 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-10-09 10:52:04.921937 | orchestrator | 2025-10-09 10:51:59 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-10-09 10:52:04.921950 | orchestrator | 2025-10-09 10:51:59 | INFO  | Setting internal_version = 0.6.3 2025-10-09 10:52:04.921962 | orchestrator | 2025-10-09 10:51:59 | INFO  | Setting image_original_user = cirros 2025-10-09 10:52:04.921974 | orchestrator | 2025-10-09 10:51:59 | INFO  | Adding tag os:cirros 2025-10-09 10:52:04.921987 | orchestrator | 2025-10-09 10:51:59 | INFO  | Setting property architecture: x86_64 2025-10-09 10:52:04.921999 | orchestrator | 2025-10-09 10:52:00 | INFO  | Setting property hw_disk_bus: scsi 2025-10-09 10:52:04.922011 | orchestrator | 2025-10-09 10:52:00 | INFO  | Setting property hw_rng_model: virtio 2025-10-09 10:52:04.922096 | orchestrator | 2025-10-09 10:52:00 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-10-09 10:52:04.922108 | orchestrator | 2025-10-09 10:52:00 | INFO  | Setting property hw_watchdog_action: reset 2025-10-09 10:52:04.922119 | orchestrator | 2025-10-09 10:52:01 | INFO  | Setting property hypervisor_type: qemu 2025-10-09 10:52:04.922130 | orchestrator | 2025-10-09 10:52:01 | INFO  | Setting property os_distro: cirros 2025-10-09 10:52:04.922150 | orchestrator | 2025-10-09 10:52:01 | INFO  | Setting property os_purpose: minimal 2025-10-09 10:52:04.922161 | orchestrator | 2025-10-09 10:52:01 | INFO  | Setting property replace_frequency: never 2025-10-09 10:52:04.922172 | orchestrator | 2025-10-09 10:52:02 | INFO  | Setting property uuid_validity: none 2025-10-09 10:52:04.922182 | orchestrator | 2025-10-09 10:52:02 | INFO  | Setting property provided_until: none 2025-10-09 10:52:04.922193 | orchestrator | 2025-10-09 10:52:02 | INFO  | Setting property image_description: Cirros 2025-10-09 10:52:04.922203 | orchestrator | 2025-10-09 10:52:02 | INFO  | Setting property image_name: Cirros 2025-10-09 10:52:04.922214 | orchestrator | 2025-10-09 10:52:02 | INFO  | Setting property internal_version: 0.6.3 2025-10-09 10:52:04.922224 | orchestrator | 2025-10-09 10:52:03 | INFO  | Setting property image_original_user: cirros 2025-10-09 10:52:04.922235 | orchestrator | 2025-10-09 10:52:03 | INFO  | Setting property os_version: 0.6.3 2025-10-09 10:52:04.922246 | orchestrator | 2025-10-09 10:52:03 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-10-09 10:52:04.922256 | orchestrator | 2025-10-09 10:52:03 | INFO  | Setting property image_build_date: 2024-09-26 2025-10-09 10:52:04.922273 | orchestrator | 2025-10-09 10:52:04 | INFO  | Checking status of 'Cirros 0.6.3' 2025-10-09 10:52:04.922284 | orchestrator | 2025-10-09 10:52:04 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-10-09 10:52:04.922295 | orchestrator | 2025-10-09 10:52:04 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-10-09 10:52:05.282344 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-10-09 10:52:07.526258 | orchestrator | 2025-10-09 10:52:07 | INFO  | 
date: 2025-10-09 2025-10-09 10:52:07.526363 | orchestrator | 2025-10-09 10:52:07 | INFO  | image: octavia-amphora-haproxy-2024.2.20251009.qcow2 2025-10-09 10:52:07.526379 | orchestrator | 2025-10-09 10:52:07 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2 2025-10-09 10:52:07.526417 | orchestrator | 2025-10-09 10:52:07 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2.CHECKSUM 2025-10-09 10:52:07.596039 | orchestrator | 2025-10-09 10:52:07 | INFO  | checksum: a6fe8b4f836532cd1ebf8aa04ddce92c8b8a74168572318bf2952019682a3f85 2025-10-09 10:52:07.673708 | orchestrator | 2025-10-09 10:52:07 | INFO  | It takes a moment until task 30f4f512-1bc6-4b9f-865f-e5e412712799 (image-manager) has been started and output is visible here. 2025-10-09 10:53:09.391808 | orchestrator | 2025-10-09 10:52:09 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-10-09' 2025-10-09 10:53:09.391931 | orchestrator | 2025-10-09 10:52:09 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2: 200 2025-10-09 10:53:09.391953 | orchestrator | 2025-10-09 10:52:09 | INFO  | Importing image OpenStack Octavia Amphora 2025-10-09 2025-10-09 10:53:09.391966 | orchestrator | 2025-10-09 10:52:09 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2 2025-10-09 10:53:09.391979 | orchestrator | 2025-10-09 10:52:11 | INFO  | Waiting for image to leave queued state... 2025-10-09 10:53:09.391990 | orchestrator | 2025-10-09 10:52:13 | INFO  | Waiting for import to complete... 2025-10-09 10:53:09.392024 | orchestrator | 2025-10-09 10:52:23 | INFO  | Waiting for import to complete... 2025-10-09 10:53:09.392035 | orchestrator | 2025-10-09 10:52:33 | INFO  | Waiting for import to complete... 2025-10-09 10:53:09.392087 | orchestrator | 2025-10-09 10:52:43 | INFO  | Waiting for import to complete... 2025-10-09 10:53:09.392098 | orchestrator | 2025-10-09 10:52:53 | INFO  | Waiting for import to complete... 
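The 301 bootstrap script resolves the published .CHECKSUM file and logs the expected SHA256 before the image manager imports the amphora image (see the checksum line above). Purely as an illustration of the same idea in Ansible terms, and not what the shell script actually does, a download can be verified against that digest with get_url:

```yaml
# Illustrative only: the actual import is performed by the image manager
# straight from the URL; this just shows checksum-verified fetching of the
# same artifact.
- name: Fetch Octavia amphora image with checksum verification (sketch)
  ansible.builtin.get_url:
    url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2
    dest: /tmp/octavia-amphora-haproxy-2024.2.20251009.qcow2
    checksum: sha256:a6fe8b4f836532cd1ebf8aa04ddce92c8b8a74168572318bf2952019682a3f85
```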
2025-10-09 10:53:09.392109 | orchestrator | 2025-10-09 10:53:03 | INFO  | Import of 'OpenStack Octavia Amphora 2025-10-09' successfully completed, reloading images 2025-10-09 10:53:09.392121 | orchestrator | 2025-10-09 10:53:04 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-10-09' 2025-10-09 10:53:09.392132 | orchestrator | 2025-10-09 10:53:04 | INFO  | Setting internal_version = 2025-10-09 2025-10-09 10:53:09.392143 | orchestrator | 2025-10-09 10:53:04 | INFO  | Setting image_original_user = ubuntu 2025-10-09 10:53:09.392155 | orchestrator | 2025-10-09 10:53:04 | INFO  | Adding tag amphora 2025-10-09 10:53:09.392166 | orchestrator | 2025-10-09 10:53:04 | INFO  | Adding tag os:ubuntu 2025-10-09 10:53:09.392176 | orchestrator | 2025-10-09 10:53:04 | INFO  | Setting property architecture: x86_64 2025-10-09 10:53:09.392187 | orchestrator | 2025-10-09 10:53:05 | INFO  | Setting property hw_disk_bus: scsi 2025-10-09 10:53:09.392197 | orchestrator | 2025-10-09 10:53:05 | INFO  | Setting property hw_rng_model: virtio 2025-10-09 10:53:09.392208 | orchestrator | 2025-10-09 10:53:05 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-10-09 10:53:09.392235 | orchestrator | 2025-10-09 10:53:05 | INFO  | Setting property hw_watchdog_action: reset 2025-10-09 10:53:09.392247 | orchestrator | 2025-10-09 10:53:06 | INFO  | Setting property hypervisor_type: qemu 2025-10-09 10:53:09.392257 | orchestrator | 2025-10-09 10:53:06 | INFO  | Setting property os_distro: ubuntu 2025-10-09 10:53:09.392268 | orchestrator | 2025-10-09 10:53:06 | INFO  | Setting property replace_frequency: quarterly 2025-10-09 10:53:09.392278 | orchestrator | 2025-10-09 10:53:06 | INFO  | Setting property uuid_validity: last-1 2025-10-09 10:53:09.392289 | orchestrator | 2025-10-09 10:53:06 | INFO  | Setting property provided_until: none 2025-10-09 10:53:09.392299 | orchestrator | 2025-10-09 10:53:07 | INFO  | Setting property os_purpose: network 2025-10-09 10:53:09.392310 | orchestrator | 2025-10-09 10:53:07 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-10-09 10:53:09.392321 | orchestrator | 2025-10-09 10:53:07 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-10-09 10:53:09.392332 | orchestrator | 2025-10-09 10:53:07 | INFO  | Setting property internal_version: 2025-10-09 2025-10-09 10:53:09.392344 | orchestrator | 2025-10-09 10:53:08 | INFO  | Setting property image_original_user: ubuntu 2025-10-09 10:53:09.392356 | orchestrator | 2025-10-09 10:53:08 | INFO  | Setting property os_version: 2025-10-09 2025-10-09 10:53:09.392369 | orchestrator | 2025-10-09 10:53:08 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2 2025-10-09 10:53:09.392383 | orchestrator | 2025-10-09 10:53:08 | INFO  | Setting property image_build_date: 2025-10-09 2025-10-09 10:53:09.392395 | orchestrator | 2025-10-09 10:53:09 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-10-09' 2025-10-09 10:53:09.392408 | orchestrator | 2025-10-09 10:53:09 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-10-09' 2025-10-09 10:53:09.392448 | orchestrator | 2025-10-09 10:53:09 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-10-09 10:53:09.392461 | orchestrator | 2025-10-09 10:53:09 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-10-09 10:53:09.392474 | orchestrator | 
2025-10-09 10:53:09 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-10-09 10:53:09.392488 | orchestrator | 2025-10-09 10:53:09 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-10-09 10:53:09.851447 | orchestrator | ok: Runtime: 0:03:21.212774 2025-10-09 10:53:09.864623 | 2025-10-09 10:53:09.864725 | TASK [Run checks] 2025-10-09 10:53:10.554637 | orchestrator | + set -e 2025-10-09 10:53:10.554800 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 10:53:10.554822 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 10:53:10.554841 | orchestrator | ++ INTERACTIVE=false 2025-10-09 10:53:10.554853 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 10:53:10.554864 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 10:53:10.554890 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-10-09 10:53:10.556020 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-10-09 10:53:10.561522 | orchestrator | ++ export MANAGER_VERSION=9.4.0 2025-10-09 10:53:10.561546 | orchestrator | ++ MANAGER_VERSION=9.4.0 2025-10-09 10:53:10.561581 | orchestrator | + echo 2025-10-09 10:53:10.561596 | orchestrator | 2025-10-09 10:53:10.561607 | orchestrator | # CHECK 2025-10-09 10:53:10.561617 | orchestrator | 2025-10-09 10:53:10.561639 | orchestrator | + echo '# CHECK' 2025-10-09 10:53:10.561649 | orchestrator | + echo 2025-10-09 10:53:10.562121 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-10-09 10:53:10.563460 | orchestrator | ++ semver 9.4.0 5.0.0 2025-10-09 10:53:10.627335 | orchestrator | 2025-10-09 10:53:10.627368 | orchestrator | ## Containers @ testbed-manager 2025-10-09 10:53:10.627378 | orchestrator | 2025-10-09 10:53:10.627389 | orchestrator | + [[ 1 -eq -1 ]] 2025-10-09 10:53:10.627399 | orchestrator | + echo 2025-10-09 10:53:10.627409 | orchestrator | + echo '## Containers @ testbed-manager' 2025-10-09 10:53:10.627419 | orchestrator | + echo 2025-10-09 10:53:10.627429 | orchestrator | + osism container testbed-manager ps 2025-10-09 10:53:13.027247 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-10-09 10:53:13.027376 | orchestrator | 7a2e53f01874 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250928 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter 2025-10-09 10:53:13.027399 | orchestrator | 419a40ca5077 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250928 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager 2025-10-09 10:53:13.027412 | orchestrator | e3f65c512f16 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-10-09 10:53:13.027429 | orchestrator | 56fdc57696fa registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-10-09 10:53:13.027441 | orchestrator | e1d3c984e6b2 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2025-10-09 10:53:13.027453 | orchestrator | 9571e6b8349c registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2025-10-09 10:53:13.027470 | orchestrator | f2a516bb4021 
registry.osism.tech/kolla/release/cron:3.0.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-10-09 10:53:13.027482 | orchestrator | 615b40101bf9 registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-10-09 10:53:13.027519 | orchestrator | 907c611d2083 registry.osism.tech/kolla/release/fluentd:5.0.7.20250928 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-10-09 10:53:13.027531 | orchestrator | f79c6b9bf4f3 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin 2025-10-09 10:53:13.027543 | orchestrator | 6a0e687e4d36 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient 2025-10-09 10:53:13.027554 | orchestrator | e89ab271283e registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 33 minutes ago Up 33 minutes (healthy) 8080/tcp homer 2025-10-09 10:53:13.027565 | orchestrator | 10a31bc0c0fd registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-10-09 10:53:13.027582 | orchestrator | 40797eb67f07 registry.osism.tech/osism/inventory-reconciler:0.20250927.0 "/sbin/tini -- /entr…" About an hour ago Up 40 minutes (healthy) manager-inventory_reconciler-1 2025-10-09 10:53:13.027614 | orchestrator | b1331de420fa registry.osism.tech/osism/ceph-ansible:0.20250927.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible 2025-10-09 10:53:13.027627 | orchestrator | 45e8461998cd registry.osism.tech/osism/osism-kubernetes:0.20250927.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes 2025-10-09 10:53:13.027638 | orchestrator | 5a2d403c8045 registry.osism.tech/osism/osism-ansible:0.20250927.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible 2025-10-09 10:53:13.027649 | orchestrator | 592cc1700300 registry.osism.tech/osism/kolla-ansible:0.20250928.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible 2025-10-09 10:53:13.027661 | orchestrator | 2329330b74dc registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1 2025-10-09 10:53:13.027672 | orchestrator | 5c2a876ec508 registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-flower-1 2025-10-09 10:53:13.027684 | orchestrator | 1979a48eff1a registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-10-09 10:53:13.027695 | orchestrator | 4c090a37e41f registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 6379/tcp manager-redis-1 2025-10-09 10:53:13.027714 | orchestrator | b32c4377c215 registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-openstack-1 2025-10-09 10:53:13.027726 | orchestrator | fcd21b69edde registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-beat-1 2025-10-09 10:53:13.027738 | orchestrator | 018a8c1ad9d8 registry.osism.tech/osism/osism-frontend:0.20250919.0 "docker-entrypoint.s…" About an hour ago Up 41 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2025-10-09 
10:53:13.027749 | orchestrator | e212ade094ea registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- sleep…" About an hour ago Up 41 minutes (healthy) osismclient 2025-10-09 10:53:13.027760 | orchestrator | b4f6cc9d17bb registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 3306/tcp manager-mariadb-1 2025-10-09 10:53:13.027771 | orchestrator | 0810da436660 registry.osism.tech/osism/osism:0.20250919.0 "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-listener-1 2025-10-09 10:53:13.027783 | orchestrator | 6b49cf71eea4 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-10-09 10:53:13.372992 | orchestrator | 2025-10-09 10:53:13.373079 | orchestrator | ## Images @ testbed-manager 2025-10-09 10:53:13.373092 | orchestrator | 2025-10-09 10:53:13.373103 | orchestrator | + echo 2025-10-09 10:53:13.373113 | orchestrator | + echo '## Images @ testbed-manager' 2025-10-09 10:53:13.373124 | orchestrator | + echo 2025-10-09 10:53:13.373133 | orchestrator | + osism container testbed-manager images 2025-10-09 10:53:15.776192 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-10-09 10:53:15.776290 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 7a8689ec37ba 8 hours ago 236MB 2025-10-09 10:53:15.776307 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 18 hours ago 742MB 2025-10-09 10:53:15.776335 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250928.0 12dbd243f1bc 10 days ago 591MB 2025-10-09 10:53:15.776347 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250928 c94bae3d79ab 10 days ago 283MB 2025-10-09 10:53:15.776359 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.6.0.20250928 0f1322ccc632 10 days ago 686MB 2025-10-09 10:53:15.776369 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250928 0d3a7fd647f6 10 days ago 597MB 2025-10-09 10:53:15.776381 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250928 b62e6dc3b26f 10 days ago 326MB 2025-10-09 10:53:15.776391 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250928 dc036ed891c6 10 days ago 857MB 2025-10-09 10:53:15.776402 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250928 49758bb5bd4b 10 days ago 422MB 2025-10-09 10:53:15.776413 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250928 ca935c8cced7 10 days ago 375MB 2025-10-09 10:53:15.776444 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250928 7e93e232f7b3 10 days ago 323MB 2025-10-09 10:53:15.776456 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250927.0 4ffdf65f3209 11 days ago 595MB 2025-10-09 10:53:15.776467 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250927.0 e85b2bfff740 11 days ago 544MB 2025-10-09 10:53:15.776478 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250927.0 9f9c298da75e 11 days ago 1.23GB 2025-10-09 10:53:15.776489 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250927.0 b645ecec677c 11 days ago 315MB 2025-10-09 10:53:15.776500 | orchestrator | registry.osism.tech/osism/homer v25.08.1 849a6c620511 12 days ago 11.5MB 2025-10-09 10:53:15.776511 | orchestrator | registry.osism.tech/osism/osism 0.20250919.0 
0e941d014172 2 weeks ago 325MB 2025-10-09 10:53:15.776522 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20250919.0 b7e597375e69 2 weeks ago 236MB 2025-10-09 10:53:15.776533 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 weeks ago 275MB 2025-10-09 10:53:15.776544 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 885f31622e75 2 months ago 336MB 2025-10-09 10:53:15.776555 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 2 months ago 226MB 2025-10-09 10:53:15.776566 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 3 months ago 41.4MB 2025-10-09 10:53:15.776577 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 5 months ago 453MB 2025-10-09 10:53:15.776588 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 16 months ago 146MB 2025-10-09 10:53:16.126403 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-10-09 10:53:16.126952 | orchestrator | ++ semver 9.4.0 5.0.0 2025-10-09 10:53:16.189033 | orchestrator | 2025-10-09 10:53:16.189102 | orchestrator | ## Containers @ testbed-node-0 2025-10-09 10:53:16.189115 | orchestrator | 2025-10-09 10:53:16.189127 | orchestrator | + [[ 1 -eq -1 ]] 2025-10-09 10:53:16.189138 | orchestrator | + echo 2025-10-09 10:53:16.189149 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-10-09 10:53:16.189161 | orchestrator | + echo 2025-10-09 10:53:16.189172 | orchestrator | + osism container testbed-node-0 ps 2025-10-09 10:53:18.680425 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-10-09 10:53:18.680512 | orchestrator | 47d9b37c0178 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-10-09 10:53:18.680527 | orchestrator | 13e6b5a5fbd2 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-10-09 10:53:18.680539 | orchestrator | 02962753b09b registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-10-09 10:53:18.680551 | orchestrator | 6a8a5ef31ff4 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-10-09 10:53:18.680562 | orchestrator | 27526402e833 registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-10-09 10:53:18.680575 | orchestrator | 9f4f52c235a9 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-10-09 10:53:18.680612 | orchestrator | 08c7f815154c registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-10-09 10:53:18.680625 | orchestrator | 5f960b71c166 registry.osism.tech/kolla/release/grafana:12.2.0.20250928 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-10-09 10:53:18.680636 | orchestrator | 5b713ee82fa1 registry.osism.tech/kolla/release/placement-api:12.0.1.20250928 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-10-09 10:53:18.680647 | orchestrator | 922b49fdae67 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-10-09 10:53:18.680658 | orchestrator | 5e24cd1e0efc registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-10-09 10:53:18.680669 | orchestrator | ecd23806477d registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-10-09 10:53:18.680680 | orchestrator | bb46b9595347 registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-10-09 10:53:18.680691 | orchestrator | d7d43d156c8a registry.osism.tech/kolla/release/designate-central:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-10-09 10:53:18.680702 | orchestrator | bf3dbad60e72 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-10-09 10:53:18.680713 | orchestrator | f54e63165706 registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-10-09 10:53:18.680724 | orchestrator | 03af262c072d registry.osism.tech/kolla/release/designate-api:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-10-09 10:53:18.680735 | orchestrator | 055bd7b80ee9 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-10-09 10:53:18.680746 | orchestrator | f602c0bd149b registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-10-09 10:53:18.680775 | orchestrator | 8d74952da18f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-10-09 10:53:18.680786 | orchestrator | 480eb7f80b0e registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-10-09 10:53:18.680798 | orchestrator | dc715dff2eb4 registry.osism.tech/kolla/release/nova-api:30.1.1.20250928 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-10-09 10:53:18.680809 | orchestrator | 1fb791bb2713 registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-10-09 10:53:18.680819 | orchestrator | 6cd3fefec325 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-10-09 10:53:18.680861 | orchestrator | 8d953e4182da registry.osism.tech/kolla/release/glance-api:29.0.1.20250928 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-10-09 10:53:18.680873 | orchestrator | 4f621387a167 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-10-09 10:53:18.680884 | orchestrator | 6463c1398ff8 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928 "dumb-init --single-…" 15 minutes ago Up 
15 minutes (healthy) cinder_scheduler 2025-10-09 10:53:18.680895 | orchestrator | 5112ec15d43e registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-10-09 10:53:18.680910 | orchestrator | 3ec911d67796 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-10-09 10:53:18.680921 | orchestrator | 605021be8656 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-10-09 10:53:18.680932 | orchestrator | 501c03dd25e9 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-10-09 10:53:18.680943 | orchestrator | 61930d6a84ec registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2025-10-09 10:53:18.680954 | orchestrator | f39acd4ea697 registry.osism.tech/kolla/release/keystone:26.0.1.20250928 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-10-09 10:53:18.680965 | orchestrator | 01c35a133f17 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-10-09 10:53:18.680976 | orchestrator | fe44b353a7a2 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-10-09 10:53:18.680991 | orchestrator | d3873924f68c registry.osism.tech/kolla/release/horizon:25.1.2.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-10-09 10:53:18.681003 | orchestrator | 0cf07ce86885 registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-10-09 10:53:18.681014 | orchestrator | 6e43340bb303 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-10-09 10:53:18.681025 | orchestrator | 40190be47a0b registry.osism.tech/kolla/release/opensearch:2.19.3.20250928 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-10-09 10:53:18.681056 | orchestrator | a2a4af821334 registry.osism.tech/kolla/release/keepalived:2.2.8.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-10-09 10:53:18.681076 | orchestrator | f88b9540ddac registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2025-10-09 10:53:18.681088 | orchestrator | e4a64455e610 registry.osism.tech/kolla/release/proxysql:2.7.3.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-10-09 10:53:18.681109 | orchestrator | 7c45034da9d7 registry.osism.tech/kolla/release/haproxy:2.8.5.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-10-09 10:53:18.681121 | orchestrator | fea16d1653b6 registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2025-10-09 10:53:18.681132 | orchestrator | 6fb53a7d7e7e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-10-09 10:53:18.681143 | orchestrator | 
5f566091325f registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-10-09 10:53:18.681154 | orchestrator | 98ad3cead297 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-10-09 10:53:18.681165 | orchestrator | d215e9b6bbed registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-10-09 10:53:18.681176 | orchestrator | 4757f70341ad registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-10-09 10:53:18.681188 | orchestrator | 9a07b986ba1b registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-10-09 10:53:18.681199 | orchestrator | 9e88c12e1274 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-10-09 10:53:18.681210 | orchestrator | abdf308af5d5 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-10-09 10:53:18.681221 | orchestrator | 9f10c29c4883 registry.osism.tech/kolla/release/redis:7.0.15.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-10-09 10:53:18.681232 | orchestrator | adf567752309 registry.osism.tech/kolla/release/memcached:1.6.24.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-10-09 10:53:18.681243 | orchestrator | a46786e1ccc6 registry.osism.tech/kolla/release/cron:3.0.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-10-09 10:53:18.681254 | orchestrator | b9b634fb2c98 registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928 "dumb-init --single-…" 32 minutes ago Up 31 minutes kolla_toolbox 2025-10-09 10:53:18.681265 | orchestrator | 981de719906d registry.osism.tech/kolla/release/fluentd:5.0.7.20250928 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-10-09 10:53:19.016727 | orchestrator | 2025-10-09 10:53:19.016811 | orchestrator | ## Images @ testbed-node-0 2025-10-09 10:53:19.016826 | orchestrator | 2025-10-09 10:53:19.016837 | orchestrator | + echo 2025-10-09 10:53:19.016847 | orchestrator | + echo '## Images @ testbed-node-0' 2025-10-09 10:53:19.016858 | orchestrator | + echo 2025-10-09 10:53:19.016869 | orchestrator | + osism container testbed-node-0 images 2025-10-09 10:53:21.529021 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-10-09 10:53:21.529148 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.5.20250928 5950a9bcfecc 10 days ago 292MB 2025-10-09 10:53:21.529163 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250928 71703156dd3c 10 days ago 383MB 2025-10-09 10:53:21.529194 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250928 c94bae3d79ab 10 days ago 283MB 2025-10-09 10:53:21.529206 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.3.20250928 aa31eec89994 10 days ago 1.53GB 2025-10-09 10:53:21.529217 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.3.20250928 adfda0ced572 10 days ago 1.55GB 2025-10-09 10:53:21.529228 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250928 e844d8195d98 10 days ago 340MB 2025-10-09 10:53:21.529239 | 
orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.6.0.20250928 0f1322ccc632 10 days ago 686MB 2025-10-09 10:53:21.529264 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20250928 95d1e32ddd75 10 days ago 294MB 2025-10-09 10:53:21.529276 | orchestrator | registry.osism.tech/kolla/release/grafana 12.2.0.20250928 2a5ec5c4b075 10 days ago 1.02GB 2025-10-09 10:53:21.529287 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250928 0d3a7fd647f6 10 days ago 597MB 2025-10-09 10:53:21.529297 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20250928 7ffb11314943 10 days ago 284MB 2025-10-09 10:53:21.529309 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.0.20250928 f0c00933987a 10 days ago 300MB 2025-10-09 10:53:21.529320 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.0.20250928 2874b2f5a67e 10 days ago 300MB 2025-10-09 10:53:21.529331 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.14.20250928 e17a6d3f86d1 10 days ago 465MB 2025-10-09 10:53:21.529342 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250928 b7caaf91c86f 10 days ago 291MB 2025-10-09 10:53:21.529352 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250928 98cf8ae628d2 10 days ago 291MB 2025-10-09 10:53:21.529363 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20250928 b2ebbbc6f157 10 days ago 1.16GB 2025-10-09 10:53:21.529374 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250928 d7156b8d95d0 10 days ago 319MB 2025-10-09 10:53:21.529385 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250928 8f8778c6faa4 10 days ago 316MB 2025-10-09 10:53:21.529395 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250928 af58545f0010 10 days ago 310MB 2025-10-09 10:53:21.529406 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250928 ca935c8cced7 10 days ago 375MB 2025-10-09 10:53:21.529417 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250928 7e93e232f7b3 10 days ago 323MB 2025-10-09 10:53:21.529428 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.0.20250928 d2b8f6f9b5fb 10 days ago 307MB 2025-10-09 10:53:21.529439 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.0.20250928 fc232d670c5e 10 days ago 307MB 2025-10-09 10:53:21.529450 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.0.20250928 4bab05b921ed 10 days ago 307MB 2025-10-09 10:53:21.529460 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.0.20250928 a8971421e2d8 10 days ago 307MB 2025-10-09 10:53:21.529471 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20250928 ea71d0b5d323 10 days ago 1.07GB 2025-10-09 10:53:21.529482 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20250928 d1aa7da3362f 10 days ago 1.07GB 2025-10-09 10:53:21.529501 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20250928 e92e277e4084 10 days ago 1.05GB 2025-10-09 10:53:21.529512 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20250928 190c13e67783 10 days ago 1.05GB 2025-10-09 10:53:21.529523 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20250928 2e69b9b4f213 10 days ago 1.05GB 2025-10-09 10:53:21.529551 | 
orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250928 ecbe5b078c99 10 days ago 1.01GB 2025-10-09 10:53:21.529576 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250928 a17655fc7d18 10 days ago 1.07GB 2025-10-09 10:53:21.529588 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250928 e2639eaace7b 10 days ago 1.1GB 2025-10-09 10:53:21.529599 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250928 3f2655c452c5 10 days ago 1.06GB 2025-10-09 10:53:21.529609 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250928 79b8bb6b6e8e 10 days ago 1.06GB 2025-10-09 10:53:21.529620 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250928 c35fa0908857 10 days ago 1.12GB 2025-10-09 10:53:21.529631 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250928 c4472b2a9533 10 days ago 1.18GB 2025-10-09 10:53:21.529641 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250928 7e385626ede5 10 days ago 1.42GB 2025-10-09 10:53:21.529652 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250928 c30e3eb83eaa 10 days ago 1.42GB 2025-10-09 10:53:21.529663 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.1.1.20250928 a154fbd594b5 10 days ago 1.22GB 2025-10-09 10:53:21.529674 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.1.1.20250928 c0d1ffbe7d0e 10 days ago 1.22GB 2025-10-09 10:53:21.529685 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.1.1.20250928 74c49b646ba9 10 days ago 1.22GB 2025-10-09 10:53:21.529695 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.1.1.20250928 2e8217e0d99d 10 days ago 1.38GB 2025-10-09 10:53:21.529706 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250928 613ca8e781c5 10 days ago 1.01GB 2025-10-09 10:53:21.529717 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250928 7a71d2c4db7b 10 days ago 1.01GB 2025-10-09 10:53:21.529728 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250928 952e80917358 10 days ago 1.01GB 2025-10-09 10:53:21.529738 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250928 aa257558c0e1 10 days ago 994MB 2025-10-09 10:53:21.529749 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250928 78cd0fd68728 10 days ago 1.26GB 2025-10-09 10:53:21.529760 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250928 a398532ca23c 10 days ago 1.15GB 2025-10-09 10:53:21.529771 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250928 8650662cbb69 10 days ago 992MB 2025-10-09 10:53:21.529782 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250928 d060ea1c8db8 10 days ago 992MB 2025-10-09 10:53:21.529793 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250928 de4c5060ba80 10 days ago 992MB 2025-10-09 10:53:21.529803 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250928 38748b1c0be1 10 days ago 992MB 2025-10-09 10:53:21.529814 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250928 e4497575f661 10 days ago 1GB 2025-10-09 10:53:21.529832 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250928 73d2a2d72b98 10 days ago 1.01GB 2025-10-09 10:53:21.529843 | orchestrator | 
registry.osism.tech/kolla/release/designate-worker 19.0.1.20250928 520bee583efe 10 days ago 1.01GB 2025-10-09 10:53:21.529854 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250928 c77705fc2372 10 days ago 1GB 2025-10-09 10:53:21.529864 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250928 424e4a700743 10 days ago 1GB 2025-10-09 10:53:21.529875 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250928 2367b93d4c2b 10 days ago 1GB 2025-10-09 10:53:21.529891 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20250928 7ceda8c5863a 10 days ago 994MB 2025-10-09 10:53:21.529902 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20250928 26c1cb1dfe1d 10 days ago 995MB 2025-10-09 10:53:21.529913 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 5 months ago 1.27GB 2025-10-09 10:53:21.905856 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-10-09 10:53:21.906590 | orchestrator | ++ semver 9.4.0 5.0.0 2025-10-09 10:53:21.965434 | orchestrator | 2025-10-09 10:53:21.965487 | orchestrator | ## Containers @ testbed-node-1 2025-10-09 10:53:21.965500 | orchestrator | 2025-10-09 10:53:21.965512 | orchestrator | + [[ 1 -eq -1 ]] 2025-10-09 10:53:21.965523 | orchestrator | + echo 2025-10-09 10:53:21.965534 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-10-09 10:53:21.965546 | orchestrator | + echo 2025-10-09 10:53:21.965558 | orchestrator | + osism container testbed-node-1 ps 2025-10-09 10:53:24.419851 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-10-09 10:53:24.419960 | orchestrator | 8b5a60c8cbbf registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-10-09 10:53:24.419976 | orchestrator | 04aebaf9b2e6 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-10-09 10:53:24.419987 | orchestrator | 4597226d7af6 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-10-09 10:53:24.419998 | orchestrator | 9795e2556cd9 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-10-09 10:53:24.420009 | orchestrator | 80b25c8f4337 registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-10-09 10:53:24.420020 | orchestrator | 526ae43ee067 registry.osism.tech/kolla/release/grafana:12.2.0.20250928 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-10-09 10:53:24.420031 | orchestrator | 949f6e37ad58 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-10-09 10:53:24.420080 | orchestrator | 76b1f4064fae registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-10-09 10:53:24.420091 | orchestrator | 196451c4f741 registry.osism.tech/kolla/release/placement-api:12.0.1.20250928 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-10-09 10:53:24.420125 | orchestrator | e3c18501c147 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-10-09 10:53:24.420138 | orchestrator | 010fd32d24af registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-10-09 10:53:24.420148 | orchestrator | 00eaa2b494e2 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-10-09 10:53:24.420160 | orchestrator | ee17c3457b0e registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-10-09 10:53:24.420171 | orchestrator | 0b0b6d0ed8d6 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-10-09 10:53:24.420181 | orchestrator | e6f176628b9e registry.osism.tech/kolla/release/designate-central:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-10-09 10:53:24.420192 | orchestrator | 5ec164a40ed6 registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-10-09 10:53:24.420226 | orchestrator | 909b684e1f79 registry.osism.tech/kolla/release/designate-api:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-10-09 10:53:24.420238 | orchestrator | adf1b7d4a9ee registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-10-09 10:53:24.420249 | orchestrator | 22340684d621 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) barbican_worker 2025-10-09 10:53:24.420281 | orchestrator | b3c78bce8170 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-10-09 10:53:24.420293 | orchestrator | 492178e01c39 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-10-09 10:53:24.420304 | orchestrator | bd920291fe09 registry.osism.tech/kolla/release/nova-api:30.1.1.20250928 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-10-09 10:53:24.420316 | orchestrator | c097627a3508 registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-10-09 10:53:24.420327 | orchestrator | 81f5ab50ef8a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-10-09 10:53:24.420340 | orchestrator | f3c1560f0434 registry.osism.tech/kolla/release/glance-api:29.0.1.20250928 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-10-09 10:53:24.420351 | orchestrator | f93217804bde registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-10-09 10:53:24.420362 | orchestrator | ff16833d1bc9 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928 "dumb-init --single-…" 15 minutes ago Up 
15 minutes (healthy) cinder_scheduler 2025-10-09 10:53:24.420380 | orchestrator | 1decd6ac5fdb registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-10-09 10:53:24.420391 | orchestrator | fea7d6cc4b73 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-10-09 10:53:24.420402 | orchestrator | af1845e82ed3 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-10-09 10:53:24.420413 | orchestrator | 99c0d35c92e5 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-10-09 10:53:24.420424 | orchestrator | 2a3e602d5f67 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-10-09 10:53:24.420435 | orchestrator | 72f2eb06ffb3 registry.osism.tech/kolla/release/keystone:26.0.1.20250928 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-10-09 10:53:24.420446 | orchestrator | 4604e8cb4551 registry.osism.tech/kolla/release/horizon:25.1.2.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-10-09 10:53:24.420457 | orchestrator | 1b473af8f56e registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-10-09 10:53:24.420468 | orchestrator | 464b0c34f98d registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-10-09 10:53:24.420479 | orchestrator | 670ce58f1072 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-10-09 10:53:24.420490 | orchestrator | 85cc9ca0c1a5 registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-10-09 10:53:24.420501 | orchestrator | 3b1eebb275e8 registry.osism.tech/kolla/release/opensearch:2.19.3.20250928 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-10-09 10:53:24.420517 | orchestrator | 7943e1696bf3 registry.osism.tech/kolla/release/keepalived:2.2.8.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-10-09 10:53:24.420536 | orchestrator | 1353a862a1c2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-10-09 10:53:24.420547 | orchestrator | 7fdb1610a5ef registry.osism.tech/kolla/release/proxysql:2.7.3.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-10-09 10:53:24.420558 | orchestrator | 62f8d77c2285 registry.osism.tech/kolla/release/haproxy:2.8.5.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-10-09 10:53:24.420706 | orchestrator | 56525823e744 registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-10-09 10:53:24.420796 | orchestrator | bde68ef086d8 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-10-09 10:53:24.420810 | orchestrator | 
f701d6fbfbba registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-10-09 10:53:24.420844 | orchestrator | 59d1500d7562 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-10-09 10:53:24.420855 | orchestrator | e45eba6631c4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-10-09 10:53:24.420866 | orchestrator | 2287bff1acea registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-10-09 10:53:24.420875 | orchestrator | d5d4732a740e registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-10-09 10:53:24.420885 | orchestrator | ea9f152583d5 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-10-09 10:53:24.420895 | orchestrator | 9a3fe33c1bfb registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-10-09 10:53:24.420905 | orchestrator | e5f60ad6b466 registry.osism.tech/kolla/release/redis:7.0.15.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-10-09 10:53:24.420915 | orchestrator | b4c58974ce73 registry.osism.tech/kolla/release/memcached:1.6.24.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-10-09 10:53:24.420925 | orchestrator | 943dd8eba2c4 registry.osism.tech/kolla/release/cron:3.0.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-10-09 10:53:24.420935 | orchestrator | 5d6d75ff5b99 registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-10-09 10:53:24.420944 | orchestrator | 22ed8894e50c registry.osism.tech/kolla/release/fluentd:5.0.7.20250928 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-10-09 10:53:24.753965 | orchestrator | 2025-10-09 10:53:24.754150 | orchestrator | ## Images @ testbed-node-1 2025-10-09 10:53:24.754167 | orchestrator | 2025-10-09 10:53:24.754179 | orchestrator | + echo 2025-10-09 10:53:24.754191 | orchestrator | + echo '## Images @ testbed-node-1' 2025-10-09 10:53:24.754204 | orchestrator | + echo 2025-10-09 10:53:24.754215 | orchestrator | + osism container testbed-node-1 images 2025-10-09 10:53:27.220922 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-10-09 10:53:27.221091 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.5.20250928 5950a9bcfecc 10 days ago 292MB 2025-10-09 10:53:27.221110 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250928 71703156dd3c 10 days ago 383MB 2025-10-09 10:53:27.221122 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250928 c94bae3d79ab 10 days ago 283MB 2025-10-09 10:53:27.221133 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.3.20250928 adfda0ced572 10 days ago 1.55GB 2025-10-09 10:53:27.221145 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.3.20250928 aa31eec89994 10 days ago 1.53GB 2025-10-09 10:53:27.221156 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250928 e844d8195d98 10 days ago 340MB 2025-10-09 10:53:27.221166 | 
orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.6.0.20250928 0f1322ccc632 10 days ago 686MB 2025-10-09 10:53:27.221199 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20250928 95d1e32ddd75 10 days ago 294MB 2025-10-09 10:53:27.221211 | orchestrator | registry.osism.tech/kolla/release/grafana 12.2.0.20250928 2a5ec5c4b075 10 days ago 1.02GB 2025-10-09 10:53:27.221222 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250928 0d3a7fd647f6 10 days ago 597MB 2025-10-09 10:53:27.221233 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20250928 7ffb11314943 10 days ago 284MB 2025-10-09 10:53:27.221261 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.0.20250928 f0c00933987a 10 days ago 300MB 2025-10-09 10:53:27.221273 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.0.20250928 2874b2f5a67e 10 days ago 300MB 2025-10-09 10:53:27.221284 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.14.20250928 e17a6d3f86d1 10 days ago 465MB 2025-10-09 10:53:27.221295 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250928 b7caaf91c86f 10 days ago 291MB 2025-10-09 10:53:27.221306 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250928 98cf8ae628d2 10 days ago 291MB 2025-10-09 10:53:27.221316 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20250928 b2ebbbc6f157 10 days ago 1.16GB 2025-10-09 10:53:27.221327 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250928 d7156b8d95d0 10 days ago 319MB 2025-10-09 10:53:27.221338 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250928 8f8778c6faa4 10 days ago 316MB 2025-10-09 10:53:27.221349 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250928 af58545f0010 10 days ago 310MB 2025-10-09 10:53:27.221359 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250928 ca935c8cced7 10 days ago 375MB 2025-10-09 10:53:27.221376 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250928 7e93e232f7b3 10 days ago 323MB 2025-10-09 10:53:27.221387 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.0.20250928 fc232d670c5e 10 days ago 307MB 2025-10-09 10:53:27.221397 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.0.20250928 d2b8f6f9b5fb 10 days ago 307MB 2025-10-09 10:53:27.221408 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.0.20250928 4bab05b921ed 10 days ago 307MB 2025-10-09 10:53:27.221419 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.0.20250928 a8971421e2d8 10 days ago 307MB 2025-10-09 10:53:27.221430 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20250928 ea71d0b5d323 10 days ago 1.07GB 2025-10-09 10:53:27.221441 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20250928 d1aa7da3362f 10 days ago 1.07GB 2025-10-09 10:53:27.221451 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20250928 e92e277e4084 10 days ago 1.05GB 2025-10-09 10:53:27.221464 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20250928 190c13e67783 10 days ago 1.05GB 2025-10-09 10:53:27.221476 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20250928 2e69b9b4f213 10 days ago 1.05GB 2025-10-09 10:53:27.221506 | 
orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250928 e2639eaace7b 10 days ago 1.1GB 2025-10-09 10:53:27.221520 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250928 3f2655c452c5 10 days ago 1.06GB 2025-10-09 10:53:27.221539 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250928 79b8bb6b6e8e 10 days ago 1.06GB 2025-10-09 10:53:27.221552 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250928 c35fa0908857 10 days ago 1.12GB 2025-10-09 10:53:27.221564 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250928 c4472b2a9533 10 days ago 1.18GB 2025-10-09 10:53:27.221576 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250928 7e385626ede5 10 days ago 1.42GB 2025-10-09 10:53:27.221589 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250928 c30e3eb83eaa 10 days ago 1.42GB 2025-10-09 10:53:27.221600 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.1.1.20250928 a154fbd594b5 10 days ago 1.22GB 2025-10-09 10:53:27.221613 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.1.1.20250928 c0d1ffbe7d0e 10 days ago 1.22GB 2025-10-09 10:53:27.221626 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.1.1.20250928 74c49b646ba9 10 days ago 1.22GB 2025-10-09 10:53:27.221638 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.1.1.20250928 2e8217e0d99d 10 days ago 1.38GB 2025-10-09 10:53:27.221650 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250928 613ca8e781c5 10 days ago 1.01GB 2025-10-09 10:53:27.221663 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250928 7a71d2c4db7b 10 days ago 1.01GB 2025-10-09 10:53:27.221675 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250928 952e80917358 10 days ago 1.01GB 2025-10-09 10:53:27.221687 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250928 aa257558c0e1 10 days ago 994MB 2025-10-09 10:53:27.221699 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250928 78cd0fd68728 10 days ago 1.26GB 2025-10-09 10:53:27.221711 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250928 a398532ca23c 10 days ago 1.15GB 2025-10-09 10:53:27.221724 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250928 e4497575f661 10 days ago 1GB 2025-10-09 10:53:27.221736 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250928 73d2a2d72b98 10 days ago 1.01GB 2025-10-09 10:53:27.221748 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250928 520bee583efe 10 days ago 1.01GB 2025-10-09 10:53:27.221761 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250928 c77705fc2372 10 days ago 1GB 2025-10-09 10:53:27.221773 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250928 424e4a700743 10 days ago 1GB 2025-10-09 10:53:27.221785 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250928 2367b93d4c2b 10 days ago 1GB 2025-10-09 10:53:27.221798 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 5 months ago 1.27GB 2025-10-09 10:53:27.571490 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-10-09 10:53:27.571682 | orchestrator | ++ semver 9.4.0 5.0.0 2025-10-09 10:53:27.621709 | 
orchestrator | 2025-10-09 10:53:27.621782 | orchestrator | ## Containers @ testbed-node-2 2025-10-09 10:53:27.621796 | orchestrator | 2025-10-09 10:53:27.621807 | orchestrator | + [[ 1 -eq -1 ]] 2025-10-09 10:53:27.621819 | orchestrator | + echo 2025-10-09 10:53:27.621858 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-10-09 10:53:27.621879 | orchestrator | + echo 2025-10-09 10:53:27.621898 | orchestrator | + osism container testbed-node-2 ps 2025-10-09 10:53:30.118623 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-10-09 10:53:30.118742 | orchestrator | dba3ed791a7e registry.osism.tech/kolla/release/octavia-worker:15.0.2.20250928 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-10-09 10:53:30.118760 | orchestrator | 9499e19a9d97 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-10-09 10:53:30.118772 | orchestrator | 5f93bb393700 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-10-09 10:53:30.118783 | orchestrator | ab58473c222c registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-10-09 10:53:30.118794 | orchestrator | 032176f7e76c registry.osism.tech/kolla/release/octavia-api:15.0.2.20250928 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-10-09 10:53:30.118805 | orchestrator | 235c6b196905 registry.osism.tech/kolla/release/grafana:12.2.0.20250928 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-10-09 10:53:30.118816 | orchestrator | b36e4a81e3a4 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250928 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-10-09 10:53:30.118826 | orchestrator | 551f03593cae registry.osism.tech/kolla/release/magnum-api:19.0.1.20250928 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-10-09 10:53:30.118837 | orchestrator | ddd88d32ba01 registry.osism.tech/kolla/release/placement-api:12.0.1.20250928 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-10-09 10:53:30.118847 | orchestrator | 10b03e4c896f registry.osism.tech/kolla/release/designate-worker:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-10-09 10:53:30.118858 | orchestrator | 65a639ad8f9a registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-10-09 10:53:30.118868 | orchestrator | 062e19f73674 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-10-09 10:53:30.118879 | orchestrator | 093be5d3b44a registry.osism.tech/kolla/release/nova-novncproxy:30.1.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-10-09 10:53:30.118890 | orchestrator | 84ae800112f9 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-10-09 10:53:30.118901 | orchestrator | c0beb44837cd registry.osism.tech/kolla/release/designate-central:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-10-09 
10:53:30.118911 | orchestrator | bd224cc171f5 registry.osism.tech/kolla/release/nova-conductor:30.1.1.20250928 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-10-09 10:53:30.118922 | orchestrator | 4f5f092e19ff registry.osism.tech/kolla/release/designate-api:19.0.1.20250928 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-10-09 10:53:30.118933 | orchestrator | dd3e457957e3 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250928 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2025-10-09 10:53:30.118949 | orchestrator | 8a015517c505 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250928 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-10-09 10:53:30.118977 | orchestrator | 95fb19e3d30c registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250928 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-10-09 10:53:30.118989 | orchestrator | eba7a81a4031 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250928 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-10-09 10:53:30.119000 | orchestrator | 5776748eeb33 registry.osism.tech/kolla/release/nova-api:30.1.1.20250928 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-10-09 10:53:30.119010 | orchestrator | e097d649f2eb registry.osism.tech/kolla/release/nova-scheduler:30.1.1.20250928 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-10-09 10:53:30.119021 | orchestrator | 6ea4ae0b50d9 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250928 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-10-09 10:53:30.119062 | orchestrator | e81cf1e5b594 registry.osism.tech/kolla/release/glance-api:29.0.1.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2025-10-09 10:53:30.119073 | orchestrator | dd02a2019921 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-10-09 10:53:30.119085 | orchestrator | 461f668c4749 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-10-09 10:53:30.119096 | orchestrator | 843a24187b32 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-10-09 10:53:30.119107 | orchestrator | dbe3f4d3469b registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-10-09 10:53:30.119118 | orchestrator | 1057652cd161 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-10-09 10:53:30.119137 | orchestrator | ecafa83415fe registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-10-09 10:53:30.119149 | orchestrator | 81e08449295a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-10-09 10:53:30.119160 | orchestrator | 26643400bfef registry.osism.tech/kolla/release/keystone:26.0.1.20250928 
"dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-10-09 10:53:30.119173 | orchestrator | 5bb00c0d21f6 registry.osism.tech/kolla/release/horizon:25.1.2.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-10-09 10:53:30.119185 | orchestrator | 1902ee380666 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-10-09 10:53:30.119198 | orchestrator | 14639ed76c02 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250928 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-10-09 10:53:30.119217 | orchestrator | ad8eef36fec4 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.3.20250928 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-10-09 10:53:30.119229 | orchestrator | a4b8a77e8d63 registry.osism.tech/kolla/release/mariadb-server:10.11.14.20250928 "dumb-init -- kolla_…" 22 minutes ago Up 21 minutes (healthy) mariadb 2025-10-09 10:53:30.119245 | orchestrator | d928d6764cdd registry.osism.tech/kolla/release/opensearch:2.19.3.20250928 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-10-09 10:53:30.119259 | orchestrator | 373be41cfa6b registry.osism.tech/kolla/release/keepalived:2.2.8.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-10-09 10:53:30.119280 | orchestrator | 5b39f3b800bf registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-10-09 10:53:30.119293 | orchestrator | 12fcbeb1a096 registry.osism.tech/kolla/release/proxysql:2.7.3.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-10-09 10:53:30.119305 | orchestrator | 0682b11eca4d registry.osism.tech/kolla/release/haproxy:2.8.5.20250928 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-10-09 10:53:30.119317 | orchestrator | 371e2a39a7b0 registry.osism.tech/kolla/release/ovn-northd:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2025-10-09 10:53:30.119329 | orchestrator | 9de4497095b2 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-10-09 10:53:30.119342 | orchestrator | 0539cbc3ff12 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.0.20250928 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-10-09 10:53:30.119354 | orchestrator | af3ea0f1cff1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-10-09 10:53:30.119366 | orchestrator | 054e8d619cb7 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250928 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-10-09 10:53:30.119379 | orchestrator | d5f74d89d3de registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-10-09 10:53:30.119391 | orchestrator | ab73b540e2c8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-10-09 10:53:30.119403 | orchestrator | 2b7ee289aaee registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2025-10-09 10:53:30.119415 | orchestrator | 
0de0d0210203 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-10-09 10:53:30.119428 | orchestrator | 6caf9fccb573 registry.osism.tech/kolla/release/redis:7.0.15.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-10-09 10:53:30.119440 | orchestrator | 2b9a0a874b2f registry.osism.tech/kolla/release/memcached:1.6.24.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-10-09 10:53:30.119452 | orchestrator | 51f2a054d2d1 registry.osism.tech/kolla/release/cron:3.0.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-10-09 10:53:30.119470 | orchestrator | ecfb55935405 registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-10-09 10:53:30.119483 | orchestrator | 5b77b52d0969 registry.osism.tech/kolla/release/fluentd:5.0.7.20250928 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-10-09 10:53:30.512875 | orchestrator | 2025-10-09 10:53:30.512964 | orchestrator | ## Images @ testbed-node-2 2025-10-09 10:53:30.512978 | orchestrator | 2025-10-09 10:53:30.512989 | orchestrator | + echo 2025-10-09 10:53:30.513001 | orchestrator | + echo '## Images @ testbed-node-2' 2025-10-09 10:53:30.513013 | orchestrator | + echo 2025-10-09 10:53:30.513024 | orchestrator | + osism container testbed-node-2 images 2025-10-09 10:53:32.979840 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-10-09 10:53:32.979941 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.5.20250928 5950a9bcfecc 10 days ago 292MB 2025-10-09 10:53:32.979955 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250928 71703156dd3c 10 days ago 383MB 2025-10-09 10:53:32.979967 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250928 c94bae3d79ab 10 days ago 283MB 2025-10-09 10:53:32.979978 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.3.20250928 adfda0ced572 10 days ago 1.55GB 2025-10-09 10:53:32.979989 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.3.20250928 aa31eec89994 10 days ago 1.53GB 2025-10-09 10:53:32.980000 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250928 e844d8195d98 10 days ago 340MB 2025-10-09 10:53:32.980010 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.6.0.20250928 0f1322ccc632 10 days ago 686MB 2025-10-09 10:53:32.980021 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20250928 95d1e32ddd75 10 days ago 294MB 2025-10-09 10:53:32.980060 | orchestrator | registry.osism.tech/kolla/release/grafana 12.2.0.20250928 2a5ec5c4b075 10 days ago 1.02GB 2025-10-09 10:53:32.980072 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250928 0d3a7fd647f6 10 days ago 597MB 2025-10-09 10:53:32.980082 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20250928 7ffb11314943 10 days ago 284MB 2025-10-09 10:53:32.980093 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.0.20250928 f0c00933987a 10 days ago 300MB 2025-10-09 10:53:32.980104 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.0.20250928 2874b2f5a67e 10 days ago 300MB 2025-10-09 10:53:32.980114 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.14.20250928 e17a6d3f86d1 10 days ago 465MB 2025-10-09 10:53:32.980143 | orchestrator | 
registry.osism.tech/kolla/release/redis 7.0.15.20250928 b7caaf91c86f 10 days ago 291MB 2025-10-09 10:53:32.980155 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250928 98cf8ae628d2 10 days ago 291MB 2025-10-09 10:53:32.980166 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20250928 b2ebbbc6f157 10 days ago 1.16GB 2025-10-09 10:53:32.980176 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250928 d7156b8d95d0 10 days ago 319MB 2025-10-09 10:53:32.980187 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250928 8f8778c6faa4 10 days ago 316MB 2025-10-09 10:53:32.980198 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250928 af58545f0010 10 days ago 310MB 2025-10-09 10:53:32.980229 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250928 ca935c8cced7 10 days ago 375MB 2025-10-09 10:53:32.980240 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250928 7e93e232f7b3 10 days ago 323MB 2025-10-09 10:53:32.980251 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.0.20250928 fc232d670c5e 10 days ago 307MB 2025-10-09 10:53:32.980261 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.0.20250928 d2b8f6f9b5fb 10 days ago 307MB 2025-10-09 10:53:32.980272 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.0.20250928 4bab05b921ed 10 days ago 307MB 2025-10-09 10:53:32.980283 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.0.20250928 a8971421e2d8 10 days ago 307MB 2025-10-09 10:53:32.980294 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20250928 ea71d0b5d323 10 days ago 1.07GB 2025-10-09 10:53:32.980304 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20250928 d1aa7da3362f 10 days ago 1.07GB 2025-10-09 10:53:32.980315 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20250928 e92e277e4084 10 days ago 1.05GB 2025-10-09 10:53:32.980326 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20250928 190c13e67783 10 days ago 1.05GB 2025-10-09 10:53:32.980336 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20250928 2e69b9b4f213 10 days ago 1.05GB 2025-10-09 10:53:32.980363 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250928 e2639eaace7b 10 days ago 1.1GB 2025-10-09 10:53:32.980374 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250928 3f2655c452c5 10 days ago 1.06GB 2025-10-09 10:53:32.980386 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250928 79b8bb6b6e8e 10 days ago 1.06GB 2025-10-09 10:53:32.980404 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250928 c35fa0908857 10 days ago 1.12GB 2025-10-09 10:53:32.980417 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250928 c4472b2a9533 10 days ago 1.18GB 2025-10-09 10:53:32.980429 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250928 7e385626ede5 10 days ago 1.42GB 2025-10-09 10:53:32.980441 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250928 c30e3eb83eaa 10 days ago 1.42GB 2025-10-09 10:53:32.980453 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.1.1.20250928 a154fbd594b5 10 days ago 1.22GB 2025-10-09 10:53:32.980465 | orchestrator | 
registry.osism.tech/kolla/release/nova-conductor 30.1.1.20250928 c0d1ffbe7d0e 10 days ago 1.22GB 2025-10-09 10:53:32.980477 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.1.1.20250928 74c49b646ba9 10 days ago 1.22GB 2025-10-09 10:53:32.980489 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.1.1.20250928 2e8217e0d99d 10 days ago 1.38GB 2025-10-09 10:53:32.980500 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250928 613ca8e781c5 10 days ago 1.01GB 2025-10-09 10:53:32.980512 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250928 7a71d2c4db7b 10 days ago 1.01GB 2025-10-09 10:53:32.980524 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250928 952e80917358 10 days ago 1.01GB 2025-10-09 10:53:32.980537 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250928 aa257558c0e1 10 days ago 994MB 2025-10-09 10:53:32.980555 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250928 78cd0fd68728 10 days ago 1.26GB 2025-10-09 10:53:32.980568 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250928 a398532ca23c 10 days ago 1.15GB 2025-10-09 10:53:32.980580 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250928 e4497575f661 10 days ago 1GB 2025-10-09 10:53:32.980592 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250928 73d2a2d72b98 10 days ago 1.01GB 2025-10-09 10:53:32.980604 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250928 520bee583efe 10 days ago 1.01GB 2025-10-09 10:53:32.980616 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250928 c77705fc2372 10 days ago 1GB 2025-10-09 10:53:32.980628 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250928 424e4a700743 10 days ago 1GB 2025-10-09 10:53:32.980640 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250928 2367b93d4c2b 10 days ago 1GB 2025-10-09 10:53:32.980652 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 5 months ago 1.27GB 2025-10-09 10:53:33.315374 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-10-09 10:53:33.322884 | orchestrator | + set -e 2025-10-09 10:53:33.322910 | orchestrator | + source /opt/manager-vars.sh 2025-10-09 10:53:33.324213 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-10-09 10:53:33.324236 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-09 10:53:33.324247 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-09 10:53:33.324258 | orchestrator | ++ CEPH_VERSION=reef 2025-10-09 10:53:33.324269 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-09 10:53:33.324280 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-10-09 10:53:33.324291 | orchestrator | ++ export MANAGER_VERSION=9.4.0 2025-10-09 10:53:33.324302 | orchestrator | ++ MANAGER_VERSION=9.4.0 2025-10-09 10:53:33.324312 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-09 10:53:33.324323 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-09 10:53:33.324334 | orchestrator | ++ export ARA=false 2025-10-09 10:53:33.324344 | orchestrator | ++ ARA=false 2025-10-09 10:53:33.324355 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-09 10:53:33.324365 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-09 10:53:33.324376 | orchestrator | ++ export TEMPEST=false 2025-10-09 10:53:33.324386 | orchestrator | ++ TEMPEST=false 
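A minimal sketch of the wrapper pattern visible in this trace: /opt/configuration/scripts/check-services.sh sources /opt/manager-vars.sh (the variable dump continues below) and then dispatches to a per-stack check script based on CEPH_STACK. Only the sourcing and the CEPH_STACK dispatch are taken from the trace; everything else in the sketch is an assumption, not the actual contents of check-services.sh.

#!/usr/bin/env bash
# Sketch only: mirrors the "source /opt/manager-vars.sh" and CEPH_STACK
# dispatch seen in the surrounding trace; not the real check-services.sh.
set -e
source /opt/manager-vars.sh
if [[ "${CEPH_STACK}" == "ceph-ansible" ]]; then
    sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
fi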
2025-10-09 10:53:33.324397 | orchestrator | ++ export IS_ZUUL=true 2025-10-09 10:53:33.324412 | orchestrator | ++ IS_ZUUL=true 2025-10-09 10:53:33.324423 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 10:53:33.324435 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 10:53:33.324446 | orchestrator | ++ export EXTERNAL_API=false 2025-10-09 10:53:33.324457 | orchestrator | ++ EXTERNAL_API=false 2025-10-09 10:53:33.324467 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-09 10:53:33.324478 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-09 10:53:33.324489 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-09 10:53:33.324500 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-09 10:53:33.324510 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-09 10:53:33.324521 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-09 10:53:33.324532 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-10-09 10:53:33.324543 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-10-09 10:53:33.331862 | orchestrator | + set -e 2025-10-09 10:53:33.331884 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 10:53:33.331895 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 10:53:33.331906 | orchestrator | ++ INTERACTIVE=false 2025-10-09 10:53:33.331917 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 10:53:33.331927 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 10:53:33.331938 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-10-09 10:53:33.332732 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-10-09 10:53:33.340016 | orchestrator | 2025-10-09 10:53:33.340075 | orchestrator | # Ceph status 2025-10-09 10:53:33.340087 | orchestrator | 2025-10-09 10:53:33.340098 | orchestrator | ++ export MANAGER_VERSION=9.4.0 2025-10-09 10:53:33.340109 | orchestrator | ++ MANAGER_VERSION=9.4.0 2025-10-09 10:53:33.340139 | orchestrator | + echo 2025-10-09 10:53:33.340150 | orchestrator | + echo '# Ceph status' 2025-10-09 10:53:33.340161 | orchestrator | + echo 2025-10-09 10:53:33.340172 | orchestrator | + ceph -s 2025-10-09 10:53:33.958132 | orchestrator | cluster: 2025-10-09 10:53:33.958220 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-10-09 10:53:33.958233 | orchestrator | health: HEALTH_OK 2025-10-09 10:53:33.958245 | orchestrator | 2025-10-09 10:53:33.958256 | orchestrator | services: 2025-10-09 10:53:33.958268 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m) 2025-10-09 10:53:33.958292 | orchestrator | mgr: testbed-node-2(active, since 16m), standbys: testbed-node-1, testbed-node-0 2025-10-09 10:53:33.958303 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-10-09 10:53:33.958315 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2025-10-09 10:53:33.958326 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-10-09 10:53:33.958337 | orchestrator | 2025-10-09 10:53:33.958348 | orchestrator | data: 2025-10-09 10:53:33.958359 | orchestrator | volumes: 1/1 healthy 2025-10-09 10:53:33.958370 | orchestrator | pools: 14 pools, 401 pgs 2025-10-09 10:53:33.958381 | orchestrator | objects: 522 objects, 2.2 GiB 2025-10-09 10:53:33.958392 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-10-09 10:53:33.958403 | orchestrator | pgs: 401 active+clean 2025-10-09 10:53:33.958414 | 
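The ceph -s output above is the human-readable health summary; a scripted check can consume the JSON form instead. A minimal sketch, assuming jq is available on the node and that any status other than HEALTH_OK should abort — this is illustrative and is not the check script used in this job.

#!/usr/bin/env bash
# Sketch: fail fast if the cluster is not HEALTH_OK, using the JSON output
# of "ceph -s" instead of the human-readable summary shown above.
set -e
health="$(ceph -s --format json | jq -r '.health.status')"
if [[ "${health}" != "HEALTH_OK" ]]; then
    echo "Ceph reports ${health}, expected HEALTH_OK" >&2
    exit 1
fi
echo "Ceph health: ${health}"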
orchestrator | 2025-10-09 10:53:34.021530 | orchestrator | 2025-10-09 10:53:34.021614 | orchestrator | # Ceph versions 2025-10-09 10:53:34.021629 | orchestrator | 2025-10-09 10:53:34.021640 | orchestrator | + echo 2025-10-09 10:53:34.021650 | orchestrator | + echo '# Ceph versions' 2025-10-09 10:53:34.021661 | orchestrator | + echo 2025-10-09 10:53:34.021671 | orchestrator | + ceph versions 2025-10-09 10:53:34.617102 | orchestrator | { 2025-10-09 10:53:34.617196 | orchestrator | "mon": { 2025-10-09 10:53:34.617210 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-10-09 10:53:34.617223 | orchestrator | }, 2025-10-09 10:53:34.617234 | orchestrator | "mgr": { 2025-10-09 10:53:34.617245 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-10-09 10:53:34.617256 | orchestrator | }, 2025-10-09 10:53:34.617268 | orchestrator | "osd": { 2025-10-09 10:53:34.617279 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-10-09 10:53:34.617290 | orchestrator | }, 2025-10-09 10:53:34.617301 | orchestrator | "mds": { 2025-10-09 10:53:34.617312 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-10-09 10:53:34.617323 | orchestrator | }, 2025-10-09 10:53:34.617334 | orchestrator | "rgw": { 2025-10-09 10:53:34.617345 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-10-09 10:53:34.617355 | orchestrator | }, 2025-10-09 10:53:34.617366 | orchestrator | "overall": { 2025-10-09 10:53:34.617378 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-10-09 10:53:34.617389 | orchestrator | } 2025-10-09 10:53:34.617400 | orchestrator | } 2025-10-09 10:53:34.660745 | orchestrator | 2025-10-09 10:53:34.660774 | orchestrator | # Ceph OSD tree 2025-10-09 10:53:34.660786 | orchestrator | 2025-10-09 10:53:34.660797 | orchestrator | + echo 2025-10-09 10:53:34.660808 | orchestrator | + echo '# Ceph OSD tree' 2025-10-09 10:53:34.660820 | orchestrator | + echo 2025-10-09 10:53:34.660831 | orchestrator | + ceph osd df tree 2025-10-09 10:53:35.221749 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-10-09 10:53:35.221886 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-10-09 10:53:35.221903 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2025-10-09 10:53:35.221916 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 70 MiB 18 GiB 7.63 1.29 204 up osd.1 2025-10-09 10:53:35.221928 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 860 MiB 787 MiB 1 KiB 74 MiB 19 GiB 4.20 0.71 186 up osd.4 2025-10-09 10:53:35.221939 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-10-09 10:53:35.221950 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.73 0.97 174 up osd.0 2025-10-09 10:53:35.221983 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.10 1.03 218 up osd.3 2025-10-09 10:53:35.221994 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-10-09 10:53:35.222005 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.80 1.15 191 up osd.2 
2025-10-09 10:53:35.222107 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 955 MiB 1 KiB 74 MiB 19 GiB 5.03 0.85 197 up osd.5 2025-10-09 10:53:35.222121 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-10-09 10:53:35.222133 | orchestrator | MIN/MAX VAR: 0.71/1.29 STDDEV: 1.12 2025-10-09 10:53:35.269759 | orchestrator | 2025-10-09 10:53:35.269823 | orchestrator | # Ceph monitor status 2025-10-09 10:53:35.269837 | orchestrator | 2025-10-09 10:53:35.269849 | orchestrator | + echo 2025-10-09 10:53:35.269861 | orchestrator | + echo '# Ceph monitor status' 2025-10-09 10:53:35.269872 | orchestrator | + echo 2025-10-09 10:53:35.269883 | orchestrator | + ceph mon stat 2025-10-09 10:53:35.856662 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-10-09 10:53:35.901709 | orchestrator | 2025-10-09 10:53:35.901785 | orchestrator | # Ceph quorum status 2025-10-09 10:53:35.901799 | orchestrator | 2025-10-09 10:53:35.901811 | orchestrator | + echo 2025-10-09 10:53:35.901822 | orchestrator | + echo '# Ceph quorum status' 2025-10-09 10:53:35.901833 | orchestrator | + echo 2025-10-09 10:53:35.902423 | orchestrator | + ceph quorum_status 2025-10-09 10:53:35.902517 | orchestrator | + jq 2025-10-09 10:53:36.559510 | orchestrator | { 2025-10-09 10:53:36.559609 | orchestrator | "election_epoch": 4, 2025-10-09 10:53:36.559624 | orchestrator | "quorum": [ 2025-10-09 10:53:36.559636 | orchestrator | 0, 2025-10-09 10:53:36.559647 | orchestrator | 1, 2025-10-09 10:53:36.559658 | orchestrator | 2 2025-10-09 10:53:36.559669 | orchestrator | ], 2025-10-09 10:53:36.559680 | orchestrator | "quorum_names": [ 2025-10-09 10:53:36.559691 | orchestrator | "testbed-node-0", 2025-10-09 10:53:36.559701 | orchestrator | "testbed-node-1", 2025-10-09 10:53:36.559712 | orchestrator | "testbed-node-2" 2025-10-09 10:53:36.559723 | orchestrator | ], 2025-10-09 10:53:36.559734 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-10-09 10:53:36.559746 | orchestrator | "quorum_age": 1748, 2025-10-09 10:53:36.559757 | orchestrator | "features": { 2025-10-09 10:53:36.559768 | orchestrator | "quorum_con": "4540138322906710015", 2025-10-09 10:53:36.559779 | orchestrator | "quorum_mon": [ 2025-10-09 10:53:36.559789 | orchestrator | "kraken", 2025-10-09 10:53:36.559800 | orchestrator | "luminous", 2025-10-09 10:53:36.559811 | orchestrator | "mimic", 2025-10-09 10:53:36.559822 | orchestrator | "osdmap-prune", 2025-10-09 10:53:36.559833 | orchestrator | "nautilus", 2025-10-09 10:53:36.559843 | orchestrator | "octopus", 2025-10-09 10:53:36.559854 | orchestrator | "pacific", 2025-10-09 10:53:36.559865 | orchestrator | "elector-pinging", 2025-10-09 10:53:36.559875 | orchestrator | "quincy", 2025-10-09 10:53:36.559886 | orchestrator | "reef" 2025-10-09 10:53:36.559898 | orchestrator | ] 2025-10-09 10:53:36.559909 | orchestrator | }, 2025-10-09 10:53:36.559919 | orchestrator | "monmap": { 2025-10-09 10:53:36.559930 | orchestrator | "epoch": 1, 2025-10-09 10:53:36.559941 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-10-09 10:53:36.560072 | orchestrator | "modified": "2025-10-09T10:24:15.587342Z", 2025-10-09 10:53:36.560091 | orchestrator | "created": 
"2025-10-09T10:24:15.587342Z", 2025-10-09 10:53:36.560103 | orchestrator | "min_mon_release": 18, 2025-10-09 10:53:36.560134 | orchestrator | "min_mon_release_name": "reef", 2025-10-09 10:53:36.560147 | orchestrator | "election_strategy": 1, 2025-10-09 10:53:36.560159 | orchestrator | "disallowed_leaders: ": "", 2025-10-09 10:53:36.560171 | orchestrator | "stretch_mode": false, 2025-10-09 10:53:36.560183 | orchestrator | "tiebreaker_mon": "", 2025-10-09 10:53:36.560195 | orchestrator | "removed_ranks: ": "", 2025-10-09 10:53:36.560207 | orchestrator | "features": { 2025-10-09 10:53:36.560219 | orchestrator | "persistent": [ 2025-10-09 10:53:36.560253 | orchestrator | "kraken", 2025-10-09 10:53:36.560265 | orchestrator | "luminous", 2025-10-09 10:53:36.560277 | orchestrator | "mimic", 2025-10-09 10:53:36.560289 | orchestrator | "osdmap-prune", 2025-10-09 10:53:36.560300 | orchestrator | "nautilus", 2025-10-09 10:53:36.560312 | orchestrator | "octopus", 2025-10-09 10:53:36.560324 | orchestrator | "pacific", 2025-10-09 10:53:36.560336 | orchestrator | "elector-pinging", 2025-10-09 10:53:36.560349 | orchestrator | "quincy", 2025-10-09 10:53:36.560361 | orchestrator | "reef" 2025-10-09 10:53:36.560373 | orchestrator | ], 2025-10-09 10:53:36.560385 | orchestrator | "optional": [] 2025-10-09 10:53:36.560397 | orchestrator | }, 2025-10-09 10:53:36.560410 | orchestrator | "mons": [ 2025-10-09 10:53:36.560421 | orchestrator | { 2025-10-09 10:53:36.560432 | orchestrator | "rank": 0, 2025-10-09 10:53:36.560444 | orchestrator | "name": "testbed-node-0", 2025-10-09 10:53:36.560454 | orchestrator | "public_addrs": { 2025-10-09 10:53:36.560465 | orchestrator | "addrvec": [ 2025-10-09 10:53:36.560475 | orchestrator | { 2025-10-09 10:53:36.560486 | orchestrator | "type": "v2", 2025-10-09 10:53:36.560497 | orchestrator | "addr": "192.168.16.10:3300", 2025-10-09 10:53:36.560508 | orchestrator | "nonce": 0 2025-10-09 10:53:36.560518 | orchestrator | }, 2025-10-09 10:53:36.560529 | orchestrator | { 2025-10-09 10:53:36.560540 | orchestrator | "type": "v1", 2025-10-09 10:53:36.560550 | orchestrator | "addr": "192.168.16.10:6789", 2025-10-09 10:53:36.560561 | orchestrator | "nonce": 0 2025-10-09 10:53:36.560572 | orchestrator | } 2025-10-09 10:53:36.560582 | orchestrator | ] 2025-10-09 10:53:36.560593 | orchestrator | }, 2025-10-09 10:53:36.560604 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-10-09 10:53:36.560614 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-10-09 10:53:36.560625 | orchestrator | "priority": 0, 2025-10-09 10:53:36.560636 | orchestrator | "weight": 0, 2025-10-09 10:53:36.560646 | orchestrator | "crush_location": "{}" 2025-10-09 10:53:36.560657 | orchestrator | }, 2025-10-09 10:53:36.560667 | orchestrator | { 2025-10-09 10:53:36.560678 | orchestrator | "rank": 1, 2025-10-09 10:53:36.560688 | orchestrator | "name": "testbed-node-1", 2025-10-09 10:53:36.560699 | orchestrator | "public_addrs": { 2025-10-09 10:53:36.560710 | orchestrator | "addrvec": [ 2025-10-09 10:53:36.560720 | orchestrator | { 2025-10-09 10:53:36.560731 | orchestrator | "type": "v2", 2025-10-09 10:53:36.560742 | orchestrator | "addr": "192.168.16.11:3300", 2025-10-09 10:53:36.560752 | orchestrator | "nonce": 0 2025-10-09 10:53:36.560763 | orchestrator | }, 2025-10-09 10:53:36.560774 | orchestrator | { 2025-10-09 10:53:36.560784 | orchestrator | "type": "v1", 2025-10-09 10:53:36.560795 | orchestrator | "addr": "192.168.16.11:6789", 2025-10-09 10:53:36.560806 | orchestrator | "nonce": 0 2025-10-09 
10:53:36.560816 | orchestrator | } 2025-10-09 10:53:36.560827 | orchestrator | ] 2025-10-09 10:53:36.560837 | orchestrator | }, 2025-10-09 10:53:36.560848 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-10-09 10:53:36.560859 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-10-09 10:53:36.560870 | orchestrator | "priority": 0, 2025-10-09 10:53:36.560880 | orchestrator | "weight": 0, 2025-10-09 10:53:36.560891 | orchestrator | "crush_location": "{}" 2025-10-09 10:53:36.560901 | orchestrator | }, 2025-10-09 10:53:36.560912 | orchestrator | { 2025-10-09 10:53:36.560922 | orchestrator | "rank": 2, 2025-10-09 10:53:36.560933 | orchestrator | "name": "testbed-node-2", 2025-10-09 10:53:36.560944 | orchestrator | "public_addrs": { 2025-10-09 10:53:36.560954 | orchestrator | "addrvec": [ 2025-10-09 10:53:36.560965 | orchestrator | { 2025-10-09 10:53:36.560976 | orchestrator | "type": "v2", 2025-10-09 10:53:36.560986 | orchestrator | "addr": "192.168.16.12:3300", 2025-10-09 10:53:36.560997 | orchestrator | "nonce": 0 2025-10-09 10:53:36.561008 | orchestrator | }, 2025-10-09 10:53:36.561018 | orchestrator | { 2025-10-09 10:53:36.561029 | orchestrator | "type": "v1", 2025-10-09 10:53:36.561059 | orchestrator | "addr": "192.168.16.12:6789", 2025-10-09 10:53:36.561071 | orchestrator | "nonce": 0 2025-10-09 10:53:36.561082 | orchestrator | } 2025-10-09 10:53:36.561092 | orchestrator | ] 2025-10-09 10:53:36.561103 | orchestrator | }, 2025-10-09 10:53:36.561114 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-10-09 10:53:36.561125 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-10-09 10:53:36.561143 | orchestrator | "priority": 0, 2025-10-09 10:53:36.561154 | orchestrator | "weight": 0, 2025-10-09 10:53:36.561165 | orchestrator | "crush_location": "{}" 2025-10-09 10:53:36.561176 | orchestrator | } 2025-10-09 10:53:36.561187 | orchestrator | ] 2025-10-09 10:53:36.561198 | orchestrator | } 2025-10-09 10:53:36.561209 | orchestrator | } 2025-10-09 10:53:36.561360 | orchestrator | 2025-10-09 10:53:36.561377 | orchestrator | # Ceph free space status 2025-10-09 10:53:36.561388 | orchestrator | 2025-10-09 10:53:36.561399 | orchestrator | + echo 2025-10-09 10:53:36.561410 | orchestrator | + echo '# Ceph free space status' 2025-10-09 10:53:36.561421 | orchestrator | + echo 2025-10-09 10:53:36.561432 | orchestrator | + ceph df 2025-10-09 10:53:37.163542 | orchestrator | --- RAW STORAGE --- 2025-10-09 10:53:37.163632 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-10-09 10:53:37.163656 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-10-09 10:53:37.163666 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-10-09 10:53:37.163676 | orchestrator | 2025-10-09 10:53:37.163686 | orchestrator | --- POOLS --- 2025-10-09 10:53:37.163697 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-10-09 10:53:37.163709 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-10-09 10:53:37.163719 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-10-09 10:53:37.163729 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-10-09 10:53:37.163738 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-10-09 10:53:37.163748 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-10-09 10:53:37.163757 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-10-09 10:53:37.163767 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-10-09 10:53:37.163776 | orchestrator 
| default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-10-09 10:53:37.163786 | orchestrator | .rgw.root 9 32 2.6 KiB 6 48 KiB 0 52 GiB 2025-10-09 10:53:37.163795 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-10-09 10:53:37.163805 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-10-09 10:53:37.163814 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2025-10-09 10:53:37.163824 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-10-09 10:53:37.163834 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-10-09 10:53:37.219825 | orchestrator | ++ semver 9.4.0 5.0.0 2025-10-09 10:53:37.286297 | orchestrator | + [[ 1 -eq -1 ]] 2025-10-09 10:53:37.286327 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-10-09 10:53:37.286337 | orchestrator | + osism apply facts 2025-10-09 10:53:39.463183 | orchestrator | 2025-10-09 10:53:39 | INFO  | Task 8b26d879-9e48-4e2a-a529-5c3a9835c0c9 (facts) was prepared for execution. 2025-10-09 10:53:39.463282 | orchestrator | 2025-10-09 10:53:39 | INFO  | It takes a moment until task 8b26d879-9e48-4e2a-a529-5c3a9835c0c9 (facts) has been started and output is visible here. 2025-10-09 10:53:53.639401 | orchestrator | 2025-10-09 10:53:53.639510 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-10-09 10:53:53.639526 | orchestrator | 2025-10-09 10:53:53.639538 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-10-09 10:53:53.639549 | orchestrator | Thursday 09 October 2025 10:53:44 +0000 (0:00:00.291) 0:00:00.291 ****** 2025-10-09 10:53:53.639560 | orchestrator | ok: [testbed-manager] 2025-10-09 10:53:53.639572 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:53:53.639583 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:53:53.639594 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:53:53.639605 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:53:53.639616 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:53:53.639627 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:53:53.639638 | orchestrator | 2025-10-09 10:53:53.639649 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-10-09 10:53:53.639686 | orchestrator | Thursday 09 October 2025 10:53:45 +0000 (0:00:01.756) 0:00:02.047 ****** 2025-10-09 10:53:53.639698 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:53:53.639710 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:53:53.639721 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:53:53.639731 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:53:53.639742 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:53:53.639796 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:53:53.639808 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:53:53.639819 | orchestrator | 2025-10-09 10:53:53.639830 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:53:53.639842 | orchestrator | 2025-10-09 10:53:53.639853 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 10:53:53.639864 | orchestrator | Thursday 09 October 2025 10:53:47 +0000 (0:00:01.423) 0:00:03.471 ****** 2025-10-09 10:53:53.639875 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:53:53.639886 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:53:53.639897 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:53:53.639908 | orchestrator | ok: 
[testbed-manager] 2025-10-09 10:53:53.639919 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:53:53.639930 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:53:53.639941 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:53:53.639953 | orchestrator | 2025-10-09 10:53:53.639966 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-10-09 10:53:53.639979 | orchestrator | 2025-10-09 10:53:53.639991 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-10-09 10:53:53.640003 | orchestrator | Thursday 09 October 2025 10:53:52 +0000 (0:00:05.364) 0:00:08.835 ****** 2025-10-09 10:53:53.640016 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:53:53.640053 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:53:53.640067 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:53:53.640079 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:53:53.640092 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:53:53.640105 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:53:53.640117 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:53:53.640129 | orchestrator | 2025-10-09 10:53:53.640141 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:53:53.640154 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:53:53.640168 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:53:53.640180 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:53:53.640209 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:53:53.640222 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:53:53.640235 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:53:53.640247 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:53:53.640259 | orchestrator | 2025-10-09 10:53:53.640271 | orchestrator | 2025-10-09 10:53:53.640284 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:53:53.640296 | orchestrator | Thursday 09 October 2025 10:53:53 +0000 (0:00:00.614) 0:00:09.449 ****** 2025-10-09 10:53:53.640308 | orchestrator | =============================================================================== 2025-10-09 10:53:53.640328 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.36s 2025-10-09 10:53:53.640339 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.76s 2025-10-09 10:53:53.640350 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.42s 2025-10-09 10:53:53.640361 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2025-10-09 10:53:53.968707 | orchestrator | + osism validate ceph-mons 2025-10-09 10:54:27.512613 | orchestrator | 2025-10-09 10:54:27.512751 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-10-09 10:54:27.512768 | orchestrator | 2025-10-09 10:54:27.512781 | orchestrator | TASK [Get timestamp for 
report file] ******************************************* 2025-10-09 10:54:27.512793 | orchestrator | Thursday 09 October 2025 10:54:10 +0000 (0:00:00.454) 0:00:00.454 ****** 2025-10-09 10:54:27.512805 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:54:27.512816 | orchestrator | 2025-10-09 10:54:27.512827 | orchestrator | TASK [Create report output directory] ****************************************** 2025-10-09 10:54:27.512838 | orchestrator | Thursday 09 October 2025 10:54:11 +0000 (0:00:00.705) 0:00:01.160 ****** 2025-10-09 10:54:27.512849 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:54:27.512860 | orchestrator | 2025-10-09 10:54:27.512891 | orchestrator | TASK [Define report vars] ****************************************************** 2025-10-09 10:54:27.512902 | orchestrator | Thursday 09 October 2025 10:54:12 +0000 (0:00:00.928) 0:00:02.088 ****** 2025-10-09 10:54:27.512913 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.512926 | orchestrator | 2025-10-09 10:54:27.512937 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-10-09 10:54:27.512948 | orchestrator | Thursday 09 October 2025 10:54:12 +0000 (0:00:00.264) 0:00:02.353 ****** 2025-10-09 10:54:27.512959 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.512970 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:54:27.512981 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:54:27.512991 | orchestrator | 2025-10-09 10:54:27.513003 | orchestrator | TASK [Get container info] ****************************************************** 2025-10-09 10:54:27.513014 | orchestrator | Thursday 09 October 2025 10:54:13 +0000 (0:00:00.370) 0:00:02.723 ****** 2025-10-09 10:54:27.513065 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:54:27.513078 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:54:27.513089 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.513100 | orchestrator | 2025-10-09 10:54:27.513111 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-10-09 10:54:27.513123 | orchestrator | Thursday 09 October 2025 10:54:14 +0000 (0:00:01.122) 0:00:03.846 ****** 2025-10-09 10:54:27.513136 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.513149 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:54:27.513161 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:54:27.513174 | orchestrator | 2025-10-09 10:54:27.513186 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-10-09 10:54:27.513199 | orchestrator | Thursday 09 October 2025 10:54:14 +0000 (0:00:00.301) 0:00:04.147 ****** 2025-10-09 10:54:27.513212 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.513224 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:54:27.513237 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:54:27.513249 | orchestrator | 2025-10-09 10:54:27.513261 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:54:27.513273 | orchestrator | Thursday 09 October 2025 10:54:15 +0000 (0:00:00.557) 0:00:04.705 ****** 2025-10-09 10:54:27.513285 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.513298 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:54:27.513311 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:54:27.513323 | orchestrator | 2025-10-09 10:54:27.513336 | orchestrator | TASK [Set test result to failed 
if ceph-mon is not running] ******************** 2025-10-09 10:54:27.513348 | orchestrator | Thursday 09 October 2025 10:54:15 +0000 (0:00:00.362) 0:00:05.067 ****** 2025-10-09 10:54:27.513385 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.513398 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:54:27.513410 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:54:27.513422 | orchestrator | 2025-10-09 10:54:27.513435 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-10-09 10:54:27.513448 | orchestrator | Thursday 09 October 2025 10:54:15 +0000 (0:00:00.344) 0:00:05.412 ****** 2025-10-09 10:54:27.513461 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.513474 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:54:27.513485 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:54:27.513495 | orchestrator | 2025-10-09 10:54:27.513506 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-09 10:54:27.513517 | orchestrator | Thursday 09 October 2025 10:54:16 +0000 (0:00:00.323) 0:00:05.735 ****** 2025-10-09 10:54:27.513528 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.513539 | orchestrator | 2025-10-09 10:54:27.513550 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-09 10:54:27.513560 | orchestrator | Thursday 09 October 2025 10:54:16 +0000 (0:00:00.757) 0:00:06.492 ****** 2025-10-09 10:54:27.513571 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.513582 | orchestrator | 2025-10-09 10:54:27.513592 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-09 10:54:27.513603 | orchestrator | Thursday 09 October 2025 10:54:17 +0000 (0:00:00.267) 0:00:06.760 ****** 2025-10-09 10:54:27.513614 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.513624 | orchestrator | 2025-10-09 10:54:27.513635 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:54:27.513646 | orchestrator | Thursday 09 October 2025 10:54:17 +0000 (0:00:00.288) 0:00:07.048 ****** 2025-10-09 10:54:27.513657 | orchestrator | 2025-10-09 10:54:27.513667 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:54:27.513678 | orchestrator | Thursday 09 October 2025 10:54:17 +0000 (0:00:00.072) 0:00:07.120 ****** 2025-10-09 10:54:27.513689 | orchestrator | 2025-10-09 10:54:27.513699 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:54:27.513710 | orchestrator | Thursday 09 October 2025 10:54:17 +0000 (0:00:00.072) 0:00:07.192 ****** 2025-10-09 10:54:27.513721 | orchestrator | 2025-10-09 10:54:27.513731 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-09 10:54:27.513742 | orchestrator | Thursday 09 October 2025 10:54:17 +0000 (0:00:00.099) 0:00:07.292 ****** 2025-10-09 10:54:27.513799 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.513811 | orchestrator | 2025-10-09 10:54:27.513821 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-10-09 10:54:27.513833 | orchestrator | Thursday 09 October 2025 10:54:17 +0000 (0:00:00.252) 0:00:07.545 ****** 2025-10-09 10:54:27.513844 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.513855 | orchestrator | 
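The tasks that follow perform the quorum test: the validator reads the monmap from one mon container and passes only if every monitor is in quorum. A minimal standalone sketch of the same idea, based on the ceph quorum_status output shown earlier in this log; the validator's actual implementation lives in the osism playbooks and may differ.

#!/usr/bin/env bash
# Sketch: compare the monitors currently in quorum against the monitors
# listed in the monmap, and fail if any monitor is missing from quorum.
set -e
status="$(ceph quorum_status)"
in_quorum="$(jq '.quorum_names | length' <<<"${status}")"
in_monmap="$(jq '.monmap.mons | length' <<<"${status}")"
if [[ "${in_quorum}" -ne "${in_monmap}" ]]; then
    echo "Only ${in_quorum} of ${in_monmap} monitors are in quorum" >&2
    exit 1
fi
echo "All ${in_monmap} monitors are in quorum"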
2025-10-09 10:54:27.513888 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-10-09 10:54:27.513899 | orchestrator | Thursday 09 October 2025 10:54:18 +0000 (0:00:00.323) 0:00:07.868 ****** 2025-10-09 10:54:27.513910 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.513921 | orchestrator | 2025-10-09 10:54:27.513932 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-10-09 10:54:27.513942 | orchestrator | Thursday 09 October 2025 10:54:18 +0000 (0:00:00.138) 0:00:08.007 ****** 2025-10-09 10:54:27.513953 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:54:27.513964 | orchestrator | 2025-10-09 10:54:27.513975 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-10-09 10:54:27.513986 | orchestrator | Thursday 09 October 2025 10:54:20 +0000 (0:00:01.716) 0:00:09.724 ****** 2025-10-09 10:54:27.514003 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.514014 | orchestrator | 2025-10-09 10:54:27.514096 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-10-09 10:54:27.514107 | orchestrator | Thursday 09 October 2025 10:54:20 +0000 (0:00:00.354) 0:00:10.078 ****** 2025-10-09 10:54:27.514128 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.514139 | orchestrator | 2025-10-09 10:54:27.514150 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-10-09 10:54:27.514161 | orchestrator | Thursday 09 October 2025 10:54:20 +0000 (0:00:00.334) 0:00:10.413 ****** 2025-10-09 10:54:27.514172 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.514183 | orchestrator | 2025-10-09 10:54:27.514194 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-10-09 10:54:27.514206 | orchestrator | Thursday 09 October 2025 10:54:21 +0000 (0:00:00.331) 0:00:10.744 ****** 2025-10-09 10:54:27.514216 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.514227 | orchestrator | 2025-10-09 10:54:27.514239 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-10-09 10:54:27.514250 | orchestrator | Thursday 09 October 2025 10:54:21 +0000 (0:00:00.330) 0:00:11.075 ****** 2025-10-09 10:54:27.514261 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.514272 | orchestrator | 2025-10-09 10:54:27.514283 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-10-09 10:54:27.514294 | orchestrator | Thursday 09 October 2025 10:54:21 +0000 (0:00:00.110) 0:00:11.185 ****** 2025-10-09 10:54:27.514305 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.514316 | orchestrator | 2025-10-09 10:54:27.514327 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-10-09 10:54:27.514338 | orchestrator | Thursday 09 October 2025 10:54:21 +0000 (0:00:00.141) 0:00:11.327 ****** 2025-10-09 10:54:27.514349 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.514360 | orchestrator | 2025-10-09 10:54:27.514371 | orchestrator | TASK [Gather status data] ****************************************************** 2025-10-09 10:54:27.514382 | orchestrator | Thursday 09 October 2025 10:54:21 +0000 (0:00:00.129) 0:00:11.457 ****** 2025-10-09 10:54:27.514393 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:54:27.514404 | orchestrator | 2025-10-09 
10:54:27.514415 | orchestrator | TASK [Set health test data] **************************************************** 2025-10-09 10:54:27.514426 | orchestrator | Thursday 09 October 2025 10:54:23 +0000 (0:00:01.421) 0:00:12.879 ****** 2025-10-09 10:54:27.514437 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.514449 | orchestrator | 2025-10-09 10:54:27.514460 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-10-09 10:54:27.514471 | orchestrator | Thursday 09 October 2025 10:54:23 +0000 (0:00:00.317) 0:00:13.197 ****** 2025-10-09 10:54:27.514482 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.514493 | orchestrator | 2025-10-09 10:54:27.514504 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-10-09 10:54:27.514515 | orchestrator | Thursday 09 October 2025 10:54:23 +0000 (0:00:00.160) 0:00:13.357 ****** 2025-10-09 10:54:27.514526 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:54:27.514537 | orchestrator | 2025-10-09 10:54:27.514548 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-10-09 10:54:27.514559 | orchestrator | Thursday 09 October 2025 10:54:23 +0000 (0:00:00.150) 0:00:13.508 ****** 2025-10-09 10:54:27.514570 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.514581 | orchestrator | 2025-10-09 10:54:27.514592 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-10-09 10:54:27.514603 | orchestrator | Thursday 09 October 2025 10:54:24 +0000 (0:00:00.174) 0:00:13.682 ****** 2025-10-09 10:54:27.514614 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.514625 | orchestrator | 2025-10-09 10:54:27.514636 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-10-09 10:54:27.514647 | orchestrator | Thursday 09 October 2025 10:54:24 +0000 (0:00:00.373) 0:00:14.056 ****** 2025-10-09 10:54:27.514658 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:54:27.514669 | orchestrator | 2025-10-09 10:54:27.514680 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-10-09 10:54:27.514691 | orchestrator | Thursday 09 October 2025 10:54:24 +0000 (0:00:00.274) 0:00:14.331 ****** 2025-10-09 10:54:27.514709 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:54:27.514720 | orchestrator | 2025-10-09 10:54:27.514731 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-09 10:54:27.514742 | orchestrator | Thursday 09 October 2025 10:54:25 +0000 (0:00:00.268) 0:00:14.599 ****** 2025-10-09 10:54:27.514753 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:54:27.514764 | orchestrator | 2025-10-09 10:54:27.514775 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-09 10:54:27.514786 | orchestrator | Thursday 09 October 2025 10:54:26 +0000 (0:00:01.739) 0:00:16.339 ****** 2025-10-09 10:54:27.514797 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:54:27.514808 | orchestrator | 2025-10-09 10:54:27.514819 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-09 10:54:27.514830 | orchestrator | Thursday 09 October 2025 10:54:27 +0000 (0:00:00.271) 0:00:16.611 ****** 2025-10-09 10:54:27.514841 | orchestrator 
| changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:54:27.514852 | orchestrator | 2025-10-09 10:54:27.514870 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:54:30.544104 | orchestrator | Thursday 09 October 2025 10:54:27 +0000 (0:00:00.263) 0:00:16.874 ****** 2025-10-09 10:54:30.544234 | orchestrator | 2025-10-09 10:54:30.544250 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:54:30.544263 | orchestrator | Thursday 09 October 2025 10:54:27 +0000 (0:00:00.070) 0:00:16.945 ****** 2025-10-09 10:54:30.544274 | orchestrator | 2025-10-09 10:54:30.544289 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:54:30.544300 | orchestrator | Thursday 09 October 2025 10:54:27 +0000 (0:00:00.072) 0:00:17.018 ****** 2025-10-09 10:54:30.544311 | orchestrator | 2025-10-09 10:54:30.544322 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-10-09 10:54:30.544333 | orchestrator | Thursday 09 October 2025 10:54:27 +0000 (0:00:00.071) 0:00:17.090 ****** 2025-10-09 10:54:30.544344 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:54:30.544355 | orchestrator | 2025-10-09 10:54:30.544366 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-09 10:54:30.544377 | orchestrator | Thursday 09 October 2025 10:54:29 +0000 (0:00:01.761) 0:00:18.851 ****** 2025-10-09 10:54:30.544388 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-10-09 10:54:30.544422 | orchestrator |  "msg": [ 2025-10-09 10:54:30.544435 | orchestrator |  "Validator run completed.", 2025-10-09 10:54:30.544447 | orchestrator |  "You can find the report file here:", 2025-10-09 10:54:30.544458 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-10-09T10:54:11+00:00-report.json", 2025-10-09 10:54:30.544469 | orchestrator |  "on the following host:", 2025-10-09 10:54:30.544480 | orchestrator |  "testbed-manager" 2025-10-09 10:54:30.544491 | orchestrator |  ] 2025-10-09 10:54:30.544502 | orchestrator | } 2025-10-09 10:54:30.544513 | orchestrator | 2025-10-09 10:54:30.544524 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:54:30.544536 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-10-09 10:54:30.544547 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:54:30.544559 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:54:30.544571 | orchestrator | 2025-10-09 10:54:30.544584 | orchestrator | 2025-10-09 10:54:30.544596 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:54:30.544609 | orchestrator | Thursday 09 October 2025 10:54:30 +0000 (0:00:00.896) 0:00:19.748 ****** 2025-10-09 10:54:30.544650 | orchestrator | =============================================================================== 2025-10-09 10:54:30.544663 | orchestrator | Write report file ------------------------------------------------------- 1.76s 2025-10-09 10:54:30.544676 | orchestrator | Aggregate test results step one ----------------------------------------- 1.74s 2025-10-09 
10:54:30.544687 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.72s 2025-10-09 10:54:30.544700 | orchestrator | Gather status data ------------------------------------------------------ 1.42s 2025-10-09 10:54:30.544712 | orchestrator | Get container info ------------------------------------------------------ 1.12s 2025-10-09 10:54:30.544724 | orchestrator | Create report output directory ------------------------------------------ 0.93s 2025-10-09 10:54:30.544736 | orchestrator | Print report file information ------------------------------------------- 0.90s 2025-10-09 10:54:30.544749 | orchestrator | Aggregate test results step one ----------------------------------------- 0.76s 2025-10-09 10:54:30.544760 | orchestrator | Get timestamp for report file ------------------------------------------- 0.71s 2025-10-09 10:54:30.544772 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s 2025-10-09 10:54:30.544784 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.37s 2025-10-09 10:54:30.544796 | orchestrator | Prepare test data for container existance test -------------------------- 0.37s 2025-10-09 10:54:30.544808 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2025-10-09 10:54:30.544820 | orchestrator | Set quorum test data ---------------------------------------------------- 0.35s 2025-10-09 10:54:30.544831 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.34s 2025-10-09 10:54:30.544844 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.33s 2025-10-09 10:54:30.544856 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-10-09 10:54:30.544869 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s 2025-10-09 10:54:30.544881 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.32s 2025-10-09 10:54:30.544894 | orchestrator | Fail due to missing containers ------------------------------------------ 0.32s 2025-10-09 10:54:30.891825 | orchestrator | + osism validate ceph-mgrs 2025-10-09 10:55:02.853342 | orchestrator | 2025-10-09 10:55:02.853476 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-10-09 10:55:02.853492 | orchestrator | 2025-10-09 10:55:02.853503 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-10-09 10:55:02.853514 | orchestrator | Thursday 09 October 2025 10:54:47 +0000 (0:00:00.483) 0:00:00.483 ****** 2025-10-09 10:55:02.853525 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:02.853536 | orchestrator | 2025-10-09 10:55:02.853546 | orchestrator | TASK [Create report output directory] ****************************************** 2025-10-09 10:55:02.853556 | orchestrator | Thursday 09 October 2025 10:54:48 +0000 (0:00:00.716) 0:00:01.199 ****** 2025-10-09 10:55:02.853565 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:02.853575 | orchestrator | 2025-10-09 10:55:02.853585 | orchestrator | TASK [Define report vars] ****************************************************** 2025-10-09 10:55:02.853595 | orchestrator | Thursday 09 October 2025 10:54:49 +0000 (0:00:00.932) 0:00:02.132 ****** 2025-10-09 10:55:02.853605 | 
orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.853616 | orchestrator | 2025-10-09 10:55:02.853626 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-10-09 10:55:02.853636 | orchestrator | Thursday 09 October 2025 10:54:49 +0000 (0:00:00.243) 0:00:02.375 ****** 2025-10-09 10:55:02.853645 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.853655 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:55:02.853684 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:55:02.853694 | orchestrator | 2025-10-09 10:55:02.853704 | orchestrator | TASK [Get container info] ****************************************************** 2025-10-09 10:55:02.853737 | orchestrator | Thursday 09 October 2025 10:54:49 +0000 (0:00:00.309) 0:00:02.685 ****** 2025-10-09 10:55:02.853748 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:55:02.853757 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.853767 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:55:02.853776 | orchestrator | 2025-10-09 10:55:02.853786 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-10-09 10:55:02.853795 | orchestrator | Thursday 09 October 2025 10:54:50 +0000 (0:00:01.086) 0:00:03.771 ****** 2025-10-09 10:55:02.853805 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.853816 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:55:02.853826 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:55:02.853837 | orchestrator | 2025-10-09 10:55:02.853848 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-10-09 10:55:02.853859 | orchestrator | Thursday 09 October 2025 10:54:51 +0000 (0:00:00.308) 0:00:04.079 ****** 2025-10-09 10:55:02.853870 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.853881 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:55:02.853892 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:55:02.853903 | orchestrator | 2025-10-09 10:55:02.853914 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:55:02.853925 | orchestrator | Thursday 09 October 2025 10:54:51 +0000 (0:00:00.526) 0:00:04.606 ****** 2025-10-09 10:55:02.853936 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.853947 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:55:02.853957 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:55:02.853968 | orchestrator | 2025-10-09 10:55:02.853979 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-10-09 10:55:02.853991 | orchestrator | Thursday 09 October 2025 10:54:52 +0000 (0:00:00.325) 0:00:04.931 ****** 2025-10-09 10:55:02.854002 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.854013 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:55:02.854094 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:55:02.854106 | orchestrator | 2025-10-09 10:55:02.854117 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-10-09 10:55:02.854128 | orchestrator | Thursday 09 October 2025 10:54:52 +0000 (0:00:00.306) 0:00:05.238 ****** 2025-10-09 10:55:02.854140 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.854151 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:55:02.854162 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:55:02.854173 | orchestrator | 2025-10-09 10:55:02.854184 | orchestrator | TASK [Aggregate test 
results step one] ***************************************** 2025-10-09 10:55:02.854194 | orchestrator | Thursday 09 October 2025 10:54:52 +0000 (0:00:00.330) 0:00:05.568 ****** 2025-10-09 10:55:02.854204 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.854213 | orchestrator | 2025-10-09 10:55:02.854223 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-09 10:55:02.854233 | orchestrator | Thursday 09 October 2025 10:54:53 +0000 (0:00:00.712) 0:00:06.281 ****** 2025-10-09 10:55:02.854242 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.854252 | orchestrator | 2025-10-09 10:55:02.854262 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-09 10:55:02.854271 | orchestrator | Thursday 09 October 2025 10:54:53 +0000 (0:00:00.282) 0:00:06.563 ****** 2025-10-09 10:55:02.854281 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.854291 | orchestrator | 2025-10-09 10:55:02.854300 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:02.854310 | orchestrator | Thursday 09 October 2025 10:54:54 +0000 (0:00:00.288) 0:00:06.852 ****** 2025-10-09 10:55:02.854320 | orchestrator | 2025-10-09 10:55:02.854329 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:02.854339 | orchestrator | Thursday 09 October 2025 10:54:54 +0000 (0:00:00.075) 0:00:06.927 ****** 2025-10-09 10:55:02.854349 | orchestrator | 2025-10-09 10:55:02.854358 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:02.854376 | orchestrator | Thursday 09 October 2025 10:54:54 +0000 (0:00:00.083) 0:00:07.011 ****** 2025-10-09 10:55:02.854385 | orchestrator | 2025-10-09 10:55:02.854395 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-09 10:55:02.854404 | orchestrator | Thursday 09 October 2025 10:54:54 +0000 (0:00:00.080) 0:00:07.091 ****** 2025-10-09 10:55:02.854414 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.854424 | orchestrator | 2025-10-09 10:55:02.854434 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-10-09 10:55:02.854444 | orchestrator | Thursday 09 October 2025 10:54:54 +0000 (0:00:00.273) 0:00:07.365 ****** 2025-10-09 10:55:02.854453 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.854463 | orchestrator | 2025-10-09 10:55:02.854493 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-10-09 10:55:02.854503 | orchestrator | Thursday 09 October 2025 10:54:54 +0000 (0:00:00.266) 0:00:07.632 ****** 2025-10-09 10:55:02.854513 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.854523 | orchestrator | 2025-10-09 10:55:02.854533 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-10-09 10:55:02.854542 | orchestrator | Thursday 09 October 2025 10:54:54 +0000 (0:00:00.141) 0:00:07.774 ****** 2025-10-09 10:55:02.854552 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:55:02.854562 | orchestrator | 2025-10-09 10:55:02.854571 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-10-09 10:55:02.854581 | orchestrator | Thursday 09 October 2025 10:54:57 +0000 (0:00:02.065) 0:00:09.840 ****** 2025-10-09 10:55:02.854591 
| orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.854600 | orchestrator | 2025-10-09 10:55:02.854610 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-10-09 10:55:02.854620 | orchestrator | Thursday 09 October 2025 10:54:57 +0000 (0:00:00.469) 0:00:10.309 ****** 2025-10-09 10:55:02.854629 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.854639 | orchestrator | 2025-10-09 10:55:02.854649 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-10-09 10:55:02.854658 | orchestrator | Thursday 09 October 2025 10:54:57 +0000 (0:00:00.333) 0:00:10.643 ****** 2025-10-09 10:55:02.854668 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.854678 | orchestrator | 2025-10-09 10:55:02.854688 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-10-09 10:55:02.854698 | orchestrator | Thursday 09 October 2025 10:54:57 +0000 (0:00:00.148) 0:00:10.791 ****** 2025-10-09 10:55:02.854707 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:55:02.854717 | orchestrator | 2025-10-09 10:55:02.854727 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-10-09 10:55:02.854737 | orchestrator | Thursday 09 October 2025 10:54:58 +0000 (0:00:00.145) 0:00:10.937 ****** 2025-10-09 10:55:02.854746 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:02.854756 | orchestrator | 2025-10-09 10:55:02.854766 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-10-09 10:55:02.854776 | orchestrator | Thursday 09 October 2025 10:54:58 +0000 (0:00:00.256) 0:00:11.193 ****** 2025-10-09 10:55:02.854785 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:55:02.854795 | orchestrator | 2025-10-09 10:55:02.854805 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-09 10:55:02.854814 | orchestrator | Thursday 09 October 2025 10:54:58 +0000 (0:00:00.236) 0:00:11.429 ****** 2025-10-09 10:55:02.854824 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:02.854834 | orchestrator | 2025-10-09 10:55:02.854843 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-09 10:55:02.854853 | orchestrator | Thursday 09 October 2025 10:54:59 +0000 (0:00:01.358) 0:00:12.788 ****** 2025-10-09 10:55:02.854863 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:02.854872 | orchestrator | 2025-10-09 10:55:02.854882 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-09 10:55:02.854898 | orchestrator | Thursday 09 October 2025 10:55:00 +0000 (0:00:00.276) 0:00:13.065 ****** 2025-10-09 10:55:02.854908 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:02.854918 | orchestrator | 2025-10-09 10:55:02.854927 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:02.854937 | orchestrator | Thursday 09 October 2025 10:55:00 +0000 (0:00:00.253) 0:00:13.318 ****** 2025-10-09 10:55:02.854947 | orchestrator | 2025-10-09 10:55:02.854956 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:02.854966 | orchestrator | Thursday 09 October 2025 10:55:00 +0000 (0:00:00.080) 
0:00:13.398 ****** 2025-10-09 10:55:02.854976 | orchestrator | 2025-10-09 10:55:02.854985 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:02.854995 | orchestrator | Thursday 09 October 2025 10:55:00 +0000 (0:00:00.073) 0:00:13.472 ****** 2025-10-09 10:55:02.855004 | orchestrator | 2025-10-09 10:55:02.855014 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-10-09 10:55:02.855051 | orchestrator | Thursday 09 October 2025 10:55:00 +0000 (0:00:00.266) 0:00:13.738 ****** 2025-10-09 10:55:02.855061 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:02.855071 | orchestrator | 2025-10-09 10:55:02.855081 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-09 10:55:02.855090 | orchestrator | Thursday 09 October 2025 10:55:02 +0000 (0:00:01.471) 0:00:15.210 ****** 2025-10-09 10:55:02.855100 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-10-09 10:55:02.855110 | orchestrator |  "msg": [ 2025-10-09 10:55:02.855120 | orchestrator |  "Validator run completed.", 2025-10-09 10:55:02.855130 | orchestrator |  "You can find the report file here:", 2025-10-09 10:55:02.855140 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-10-09T10:54:48+00:00-report.json", 2025-10-09 10:55:02.855150 | orchestrator |  "on the following host:", 2025-10-09 10:55:02.855160 | orchestrator |  "testbed-manager" 2025-10-09 10:55:02.855170 | orchestrator |  ] 2025-10-09 10:55:02.855180 | orchestrator | } 2025-10-09 10:55:02.855190 | orchestrator | 2025-10-09 10:55:02.855200 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:55:02.855211 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-10-09 10:55:02.855221 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:55:02.855239 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:55:03.218654 | orchestrator | 2025-10-09 10:55:03.218757 | orchestrator | 2025-10-09 10:55:03.218772 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:55:03.218783 | orchestrator | Thursday 09 October 2025 10:55:02 +0000 (0:00:00.416) 0:00:15.626 ****** 2025-10-09 10:55:03.218794 | orchestrator | =============================================================================== 2025-10-09 10:55:03.218804 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.07s 2025-10-09 10:55:03.218814 | orchestrator | Write report file ------------------------------------------------------- 1.47s 2025-10-09 10:55:03.218824 | orchestrator | Aggregate test results step one ----------------------------------------- 1.36s 2025-10-09 10:55:03.218855 | orchestrator | Get container info ------------------------------------------------------ 1.09s 2025-10-09 10:55:03.218866 | orchestrator | Create report output directory ------------------------------------------ 0.93s 2025-10-09 10:55:03.218876 | orchestrator | Get timestamp for report file ------------------------------------------- 0.72s 2025-10-09 10:55:03.218886 | orchestrator | Aggregate test results step one ----------------------------------------- 0.71s 2025-10-09 
10:55:03.218919 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s 2025-10-09 10:55:03.218934 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.47s 2025-10-09 10:55:03.218944 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s 2025-10-09 10:55:03.218953 | orchestrator | Print report file information ------------------------------------------- 0.42s 2025-10-09 10:55:03.218963 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s 2025-10-09 10:55:03.218973 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.33s 2025-10-09 10:55:03.218983 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2025-10-09 10:55:03.218993 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-10-09 10:55:03.219003 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2025-10-09 10:55:03.219013 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2025-10-09 10:55:03.219069 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s 2025-10-09 10:55:03.219079 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2025-10-09 10:55:03.219089 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2025-10-09 10:55:03.559193 | orchestrator | + osism validate ceph-osds 2025-10-09 10:55:25.072654 | orchestrator | 2025-10-09 10:55:25.072756 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-10-09 10:55:25.072770 | orchestrator | 2025-10-09 10:55:25.072781 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-10-09 10:55:25.072792 | orchestrator | Thursday 09 October 2025 10:55:20 +0000 (0:00:00.471) 0:00:00.471 ****** 2025-10-09 10:55:25.072802 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:25.072812 | orchestrator | 2025-10-09 10:55:25.072822 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:55:25.072831 | orchestrator | Thursday 09 October 2025 10:55:21 +0000 (0:00:00.677) 0:00:01.149 ****** 2025-10-09 10:55:25.072841 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:25.072851 | orchestrator | 2025-10-09 10:55:25.072860 | orchestrator | TASK [Create report output directory] ****************************************** 2025-10-09 10:55:25.072870 | orchestrator | Thursday 09 October 2025 10:55:21 +0000 (0:00:00.396) 0:00:01.546 ****** 2025-10-09 10:55:25.072880 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:25.072890 | orchestrator | 2025-10-09 10:55:25.072899 | orchestrator | TASK [Define report vars] ****************************************************** 2025-10-09 10:55:25.072908 | orchestrator | Thursday 09 October 2025 10:55:22 +0000 (0:00:00.972) 0:00:02.518 ****** 2025-10-09 10:55:25.072918 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:25.072929 | orchestrator | 2025-10-09 10:55:25.072938 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-10-09 10:55:25.072948 | orchestrator 
| Thursday 09 October 2025 10:55:22 +0000 (0:00:00.128) 0:00:02.646 ****** 2025-10-09 10:55:25.072957 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:25.072967 | orchestrator | 2025-10-09 10:55:25.072977 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-10-09 10:55:25.072986 | orchestrator | Thursday 09 October 2025 10:55:22 +0000 (0:00:00.148) 0:00:02.795 ****** 2025-10-09 10:55:25.072996 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:25.073005 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:25.073041 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:25.073051 | orchestrator | 2025-10-09 10:55:25.073061 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-10-09 10:55:25.073070 | orchestrator | Thursday 09 October 2025 10:55:23 +0000 (0:00:00.355) 0:00:03.150 ****** 2025-10-09 10:55:25.073080 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:25.073090 | orchestrator | 2025-10-09 10:55:25.073117 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-10-09 10:55:25.073127 | orchestrator | Thursday 09 October 2025 10:55:23 +0000 (0:00:00.156) 0:00:03.307 ****** 2025-10-09 10:55:25.073136 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:25.073146 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:25.073156 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:25.073165 | orchestrator | 2025-10-09 10:55:25.073175 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-10-09 10:55:25.073185 | orchestrator | Thursday 09 October 2025 10:55:23 +0000 (0:00:00.338) 0:00:03.645 ****** 2025-10-09 10:55:25.073194 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:25.073205 | orchestrator | 2025-10-09 10:55:25.073217 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:55:25.073227 | orchestrator | Thursday 09 October 2025 10:55:24 +0000 (0:00:00.561) 0:00:04.207 ****** 2025-10-09 10:55:25.073239 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:25.073250 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:25.073261 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:25.073272 | orchestrator | 2025-10-09 10:55:25.073283 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-10-09 10:55:25.073295 | orchestrator | Thursday 09 October 2025 10:55:24 +0000 (0:00:00.478) 0:00:04.685 ****** 2025-10-09 10:55:25.073308 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1830285822b537501f18c1a50d451fca239c53777ee71a9c0e04d8816eeb839f', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-10-09 10:55:25.073322 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7728a9d75c769960c35b1b47274a87b60a764d394b84cb0c0d49b3c1bef580c4', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-10-09 10:55:25.073340 | orchestrator | skipping: [testbed-node-3] => (item={'id': '24186671ae12bb77e007f5872ec64f01dfbfd58a0c03b5ea5c100ecb5a6cfd29', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes 
(healthy)'})  2025-10-09 10:55:25.073354 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4f5f10c14845164e46e40c0c26405738670ad2e8d59ce07dfd81193237457e72', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-10-09 10:55:25.073367 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aa45c45acff81529caf900ffe2dafe69c0b041898f67c0342a62aa747ab53d3e', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-10-09 10:55:25.073395 | orchestrator | skipping: [testbed-node-3] => (item={'id': '728065ed260db2200ab128db8fdb4ec92e5f9b2d965fa5f34a985fc845dce46a', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-10-09 10:55:25.073407 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1700b8f31ce73a9b60f5208b110451790c5da1ffc2aa5847f8b92c23af6d427b', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-10-09 10:55:25.073427 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e9670cde4c514e7fcdd34eb721a0d8883378677f80e936c8e4ea3c06ed84ef68', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-10-09 10:55:25.073439 | orchestrator | skipping: [testbed-node-3] => (item={'id': '23840e998ccf5ef86a94da9a79718cf5c08cbd1e4ef990159b7aebfad67d8a9f', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-10-09 10:55:25.073457 | orchestrator | skipping: [testbed-node-3] => (item={'id': '25fa3e1094c7c5bd767189a45a6fce314ada8d66f38452f610b64a88d8302604', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-10-09 10:55:25.073470 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bf22a1447cb4ed407ce1a8a3b26b95666fd8a8238649decee119c506aef7a81a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-10-09 10:55:25.073482 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c526a5e655f7d3886bcd35e06a16f4f6a90dd5a47dbe1201dc0639227a99e279', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-10-09 10:55:25.073494 | orchestrator | ok: [testbed-node-3] => (item={'id': '39128d6ca9a313d14ef850e372fa4031d6075580b2a79705bc10ff523d638910', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-10-09 10:55:25.073507 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ccde338f59226909c1a379733945a6510565b72160cca8cad62114b0f3fa2b4f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-10-09 10:55:25.073518 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8692c3f416df99298799e932165c7959ead7fb9b066c70d4c814d0eec801484d', 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-10-09 10:55:25.073530 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4edcfb28123ed3b9a36611e2e6ec1284f4a894134eedacbe8ee6fcbbcecb36d9', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-10-09 10:55:25.073542 | orchestrator | skipping: [testbed-node-3] => (item={'id': '853554f1ca2268c8f9e2ab079632f3eda56215e8cb37dda597c621f12bf75c57', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-10-09 10:55:25.073557 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9c486521b881fdb7eae2498f0ae57e8516a575a14e6ab6b30b4cb44d274ef0c8', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-10-09 10:55:25.073567 | orchestrator | skipping: [testbed-node-3] => (item={'id': '014a503feb30cf71dceb5ae0e8aabf683886cbe88990926816d5a5a0c0e64d17', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-09 10:55:25.073577 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ad9a5777bd1bd77d8320f3e263f9c97a100f424318f3e6a3f7d3fe53e2fe5fc4', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-09 10:55:25.073593 | orchestrator | skipping: [testbed-node-4] => (item={'id': '47376cd025c0b3ee7573cae152dd079b8ca7bd33336950bacd0092f18df2b4ff', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-10-09 10:55:25.356191 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3fd7fed5a367bf9d902bf828a76f8471e12b0c19057043776faf5e0a7095537d', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-10-09 10:55:25.356273 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb05ab07b765e07ae540c2868b1f6bb490a05563cd9bacbb09541dabf060ff3b', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-10-09 10:55:25.356306 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3cb5232852447616f5956bd7830c162fddfa2a8f787a49ffd17dc18f20cade84', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-10-09 10:55:25.356318 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a9c05d522b9b8e931c98cd6cd3821688bb57ab17a5db6b5334d2d01a7afca86a', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-10-09 10:55:25.356327 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ef8f861600f6bca4248934fd3c3a9cce46f02ce35b84514d43e8659e964e08c6', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'name': '/cinder_volume', 'state': 'running', 
'status': 'Up 14 minutes (healthy)'})  2025-10-09 10:55:25.356336 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'edc15bb048ec0abd7585811681b1e1ab0339ba3e45b2a5dd5da9b6c64a5d730d', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-10-09 10:55:25.356346 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8283cb3c818b45d9767cbba6a8dab3ad97fa129882d9410022155639f9e09bb', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-10-09 10:55:25.356355 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4c7d464019e94cc2d140974023ccd894e341c4a5484764247200a379e5a802e5', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-10-09 10:55:25.356364 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0ae14115c058f58530fc90dbc4db744596da1d2b47d60dbe8cb29cad305aacfc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-10-09 10:55:25.356374 | orchestrator | skipping: [testbed-node-4] => (item={'id': '99d60979256328ec076c5b3ed3c33b86aeb18373633e621b0787334c012a282a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-10-09 10:55:25.356384 | orchestrator | skipping: [testbed-node-4] => (item={'id': '915bd0769bc04b71a4dedef93660ea8ba87fbc46868e0b1c71003606bab6bcb2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-10-09 10:55:25.356408 | orchestrator | ok: [testbed-node-4] => (item={'id': '88238d280d3ab0ce75df66d3de0df52ba483b5abaedf88a882d62d7729f8e3e0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-10-09 10:55:25.356418 | orchestrator | ok: [testbed-node-4] => (item={'id': '529a1a952e253912710c40d5e0ac5920e8d32a6006a75f36258425155d76a8f3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-10-09 10:55:25.356428 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8903f8ccf19c6f5cbe87edf4bdd48af0ec246890eae4e282898db41155f690b', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-10-09 10:55:25.356450 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e337e115eac148d5341668f71ca0733b75a31c38828a5a6995c18f27932ce78c', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-10-09 10:55:25.356465 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4a8c362d26bfdcc7428faf740af9068536a62d0123185b736c1417e02f46673a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-10-09 10:55:25.356475 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'86b71e26a8c5e026fdf117e459a42f2b87a61944d59045ffccce74dce5ce4f51', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-10-09 10:55:25.356484 | orchestrator | skipping: [testbed-node-4] => (item={'id': '67860f212558c3fe6967f0afb5ead31033189179f771f729b75c28dd43fb9646', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-09 10:55:25.356493 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2367fab19d675a51f33a326926d95603a69856f9d4165b67c1c3cf1dcc7d1865', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-09 10:55:25.356502 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2a6fa76b05fb74e0c816f048adfa0a1deb6f17a16dd460408886e9da93e9bfc3', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.1.1.20250928', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-10-09 10:55:25.356511 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fead46df63a635ff20adf82cae4bfe5321866eb13bec7dc61e60902e5f68e260', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20250928', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-10-09 10:55:25.356520 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ea9fdc36fdeea32c3a152aee68b11f343c17af57fdb831245f1dcdedf1dccb61', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250928', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-10-09 10:55:25.356529 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5a9ff704f46acdaf7e53b93bae8e495935de041d2b1fe8d71708766b2d51a614', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.1.1.20250928', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-10-09 10:55:25.356537 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1b2960cc9cfddcfb12acd3137ef380a9333b50f08f24a47a4d11ab59299e85f3', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250928', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-10-09 10:55:25.356546 | orchestrator | skipping: [testbed-node-5] => (item={'id': '408a16d06a1412d8f33f4ea2ad6f6639cb4d3b98f34075f6d4bce9c0e30df01f', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250928', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-10-09 10:55:25.356560 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b38d617d0895f24b85b0c70f43fb529c353a0e40cb54302c1b68c92505f698dd', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250928.0.20250928', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-10-09 10:55:25.356569 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9ea8f280b12d1c4c401b54bda7efc4b806bff7963f9ea0f22c9d423c508c4a6', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250928', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-10-09 10:55:25.356578 | orchestrator | skipping: [testbed-node-5] => (item={'id': '21ea6834b8862474e0aa9b2b0caa6591c0c712737e2dac9784b89a6952b21c87', 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250928', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-10-09 10:55:25.356593 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b8803b9f7df65d917dd4895e6d5612cdde224f36e7089b32dc7b7b12fe200974', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-10-09 10:55:25.356607 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0c4e285fa8ee5852712623f785e3a94f978761faee7006ddbd08d5cafcf1244f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-10-09 10:55:33.504971 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f59c684a77620dc609fc9c272613664a6c799d59683079642b93c9f57a74fd8e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-10-09 10:55:33.505089 | orchestrator | ok: [testbed-node-5] => (item={'id': 'de4fd93ae5b4ea5c6aad3797fb36d0939760bf2af8f5f0b525142f8826eeb0d8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-10-09 10:55:33.505106 | orchestrator | ok: [testbed-node-5] => (item={'id': '536e29064cb6d71d1eadb7355d101c5559de260f74ced713f2e533a2a4dd3dfc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-10-09 10:55:33.505119 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd846207af3cb72237e59a317ab621092ae41946d5362367e320e5e36c8895b17', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.0.20250928', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-10-09 10:55:33.505132 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1f9f7b9ab0af522f3e8d4021f21077868b6ce182e30f773b508e77fe4aa1cfcb', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.0.20250928', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-10-09 10:55:33.505146 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bf2fec2075bfaff31f585847c0844e0d220b88e0e3c82087b57e908b755ec385', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.0.20250928', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-10-09 10:55:33.505157 | orchestrator | skipping: [testbed-node-5] => (item={'id': '93524d5e982b37d4727f01f5ae0571fbd60b6b85324dbe26186fc4a05d7a3f39', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250928', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-10-09 10:55:33.505169 | orchestrator | skipping: [testbed-node-5] => (item={'id': '189f6fa6d050459526712df9b9875cb61b51df8a35546e083f321777c1496112', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.6.0.20250928', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-09 10:55:33.505180 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ff954ee39509b1982536cdb267761c11b659a70a746a9aed66c97eb49def26b0', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250928', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-10-09 10:55:33.505191 | orchestrator | 2025-10-09 10:55:33.505203 | orchestrator 
| TASK [Get count of ceph-osd containers on host] ******************************** 2025-10-09 10:55:33.505215 | orchestrator | Thursday 09 October 2025 10:55:25 +0000 (0:00:00.582) 0:00:05.267 ****** 2025-10-09 10:55:33.505226 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:33.505238 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:33.505248 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:33.505259 | orchestrator | 2025-10-09 10:55:33.505270 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-10-09 10:55:33.505281 | orchestrator | Thursday 09 October 2025 10:55:25 +0000 (0:00:00.336) 0:00:05.604 ****** 2025-10-09 10:55:33.505308 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.505345 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:33.505356 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:33.505367 | orchestrator | 2025-10-09 10:55:33.505378 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-10-09 10:55:33.505389 | orchestrator | Thursday 09 October 2025 10:55:25 +0000 (0:00:00.312) 0:00:05.916 ****** 2025-10-09 10:55:33.505400 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:33.505411 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:33.505422 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:33.505432 | orchestrator | 2025-10-09 10:55:33.505443 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:55:33.505454 | orchestrator | Thursday 09 October 2025 10:55:26 +0000 (0:00:00.547) 0:00:06.463 ****** 2025-10-09 10:55:33.505465 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:33.505475 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:33.505486 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:33.505497 | orchestrator | 2025-10-09 10:55:33.505508 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-10-09 10:55:33.505519 | orchestrator | Thursday 09 October 2025 10:55:26 +0000 (0:00:00.306) 0:00:06.770 ****** 2025-10-09 10:55:33.505531 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-10-09 10:55:33.505545 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-10-09 10:55:33.505557 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.505570 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-10-09 10:55:33.505582 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-10-09 10:55:33.505609 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:33.505623 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-10-09 10:55:33.505635 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-10-09 10:55:33.505647 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:33.505660 | orchestrator | 2025-10-09 10:55:33.505672 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-10-09 10:55:33.505685 | orchestrator | Thursday 09 October 2025 10:55:27 +0000 (0:00:00.344) 0:00:07.114 ****** 2025-10-09 10:55:33.505698 | orchestrator | ok: [testbed-node-3] 
2025-10-09 10:55:33.505710 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:33.505723 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:33.505735 | orchestrator | 2025-10-09 10:55:33.505748 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-10-09 10:55:33.505760 | orchestrator | Thursday 09 October 2025 10:55:27 +0000 (0:00:00.323) 0:00:07.438 ****** 2025-10-09 10:55:33.505773 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.505786 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:33.505797 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:33.505809 | orchestrator | 2025-10-09 10:55:33.505822 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-10-09 10:55:33.505834 | orchestrator | Thursday 09 October 2025 10:55:28 +0000 (0:00:00.563) 0:00:08.001 ****** 2025-10-09 10:55:33.505846 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.505858 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:33.505871 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:33.505883 | orchestrator | 2025-10-09 10:55:33.505895 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-10-09 10:55:33.505906 | orchestrator | Thursday 09 October 2025 10:55:28 +0000 (0:00:00.300) 0:00:08.302 ****** 2025-10-09 10:55:33.505917 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:33.505928 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:33.505938 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:33.505949 | orchestrator | 2025-10-09 10:55:33.505960 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-09 10:55:33.505978 | orchestrator | Thursday 09 October 2025 10:55:28 +0000 (0:00:00.339) 0:00:08.641 ****** 2025-10-09 10:55:33.505989 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.506000 | orchestrator | 2025-10-09 10:55:33.506112 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-09 10:55:33.506134 | orchestrator | Thursday 09 October 2025 10:55:29 +0000 (0:00:00.300) 0:00:08.942 ****** 2025-10-09 10:55:33.506145 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.506155 | orchestrator | 2025-10-09 10:55:33.506166 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-09 10:55:33.506178 | orchestrator | Thursday 09 October 2025 10:55:29 +0000 (0:00:00.291) 0:00:09.234 ****** 2025-10-09 10:55:33.506188 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.506199 | orchestrator | 2025-10-09 10:55:33.506210 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:33.506221 | orchestrator | Thursday 09 October 2025 10:55:29 +0000 (0:00:00.242) 0:00:09.477 ****** 2025-10-09 10:55:33.506232 | orchestrator | 2025-10-09 10:55:33.506242 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:33.506253 | orchestrator | Thursday 09 October 2025 10:55:29 +0000 (0:00:00.068) 0:00:09.545 ****** 2025-10-09 10:55:33.506264 | orchestrator | 2025-10-09 10:55:33.506274 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:33.506285 | orchestrator | Thursday 09 October 2025 10:55:29 +0000 (0:00:00.277) 0:00:09.823 ****** 2025-10-09 
10:55:33.506296 | orchestrator | 2025-10-09 10:55:33.506306 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-09 10:55:33.506317 | orchestrator | Thursday 09 October 2025 10:55:29 +0000 (0:00:00.072) 0:00:09.895 ****** 2025-10-09 10:55:33.506328 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.506341 | orchestrator | 2025-10-09 10:55:33.506358 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-10-09 10:55:33.506375 | orchestrator | Thursday 09 October 2025 10:55:30 +0000 (0:00:00.298) 0:00:10.194 ****** 2025-10-09 10:55:33.506393 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:33.506410 | orchestrator | 2025-10-09 10:55:33.506427 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:55:33.506443 | orchestrator | Thursday 09 October 2025 10:55:30 +0000 (0:00:00.275) 0:00:10.470 ****** 2025-10-09 10:55:33.506460 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:33.506477 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:33.506495 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:33.506513 | orchestrator | 2025-10-09 10:55:33.506530 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-10-09 10:55:33.506548 | orchestrator | Thursday 09 October 2025 10:55:30 +0000 (0:00:00.325) 0:00:10.795 ****** 2025-10-09 10:55:33.506565 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:33.506585 | orchestrator | 2025-10-09 10:55:33.506602 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-10-09 10:55:33.506619 | orchestrator | Thursday 09 October 2025 10:55:31 +0000 (0:00:00.253) 0:00:11.049 ****** 2025-10-09 10:55:33.506631 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:55:33.506642 | orchestrator | 2025-10-09 10:55:33.506652 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-10-09 10:55:33.506663 | orchestrator | Thursday 09 October 2025 10:55:32 +0000 (0:00:01.735) 0:00:12.785 ****** 2025-10-09 10:55:33.506674 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:33.506685 | orchestrator | 2025-10-09 10:55:33.506696 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-10-09 10:55:33.506706 | orchestrator | Thursday 09 October 2025 10:55:32 +0000 (0:00:00.131) 0:00:12.917 ****** 2025-10-09 10:55:33.506717 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:33.506728 | orchestrator | 2025-10-09 10:55:33.506739 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-10-09 10:55:33.506761 | orchestrator | Thursday 09 October 2025 10:55:33 +0000 (0:00:00.385) 0:00:13.302 ****** 2025-10-09 10:55:33.506782 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:47.044658 | orchestrator | 2025-10-09 10:55:47.044776 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-10-09 10:55:47.044794 | orchestrator | Thursday 09 October 2025 10:55:33 +0000 (0:00:00.130) 0:00:13.432 ****** 2025-10-09 10:55:47.044807 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.044819 | orchestrator | 2025-10-09 10:55:47.044831 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:55:47.044843 | orchestrator | 
Thursday 09 October 2025 10:55:33 +0000 (0:00:00.361) 0:00:13.793 ****** 2025-10-09 10:55:47.044855 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.044866 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:47.044877 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:47.044888 | orchestrator | 2025-10-09 10:55:47.044899 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-10-09 10:55:47.044911 | orchestrator | Thursday 09 October 2025 10:55:34 +0000 (0:00:00.377) 0:00:14.171 ****** 2025-10-09 10:55:47.044922 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:55:47.044934 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:55:47.044945 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:55:47.044956 | orchestrator | 2025-10-09 10:55:47.044967 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-10-09 10:55:47.044979 | orchestrator | Thursday 09 October 2025 10:55:36 +0000 (0:00:02.395) 0:00:16.567 ****** 2025-10-09 10:55:47.044990 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.045001 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:47.045045 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:47.045057 | orchestrator | 2025-10-09 10:55:47.045069 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-10-09 10:55:47.045080 | orchestrator | Thursday 09 October 2025 10:55:36 +0000 (0:00:00.339) 0:00:16.907 ****** 2025-10-09 10:55:47.045091 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.045102 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:47.045113 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:47.045124 | orchestrator | 2025-10-09 10:55:47.045184 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-10-09 10:55:47.045199 | orchestrator | Thursday 09 October 2025 10:55:37 +0000 (0:00:00.705) 0:00:17.612 ****** 2025-10-09 10:55:47.045211 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:47.045224 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:47.045237 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:47.045250 | orchestrator | 2025-10-09 10:55:47.045262 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-10-09 10:55:47.045275 | orchestrator | Thursday 09 October 2025 10:55:37 +0000 (0:00:00.302) 0:00:17.915 ****** 2025-10-09 10:55:47.045288 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.045300 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:47.045313 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:47.045326 | orchestrator | 2025-10-09 10:55:47.045339 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-10-09 10:55:47.045352 | orchestrator | Thursday 09 October 2025 10:55:38 +0000 (0:00:00.359) 0:00:18.274 ****** 2025-10-09 10:55:47.045364 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:47.045377 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:47.045390 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:47.045403 | orchestrator | 2025-10-09 10:55:47.045415 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-10-09 10:55:47.045428 | orchestrator | Thursday 09 October 2025 10:55:38 +0000 (0:00:00.310) 0:00:18.585 ****** 2025-10-09 10:55:47.045441 | orchestrator | skipping: 
[testbed-node-3] 2025-10-09 10:55:47.045454 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:47.045466 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:47.045479 | orchestrator | 2025-10-09 10:55:47.045516 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:55:47.045529 | orchestrator | Thursday 09 October 2025 10:55:39 +0000 (0:00:00.534) 0:00:19.120 ****** 2025-10-09 10:55:47.045542 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.045553 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:47.045564 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:47.045574 | orchestrator | 2025-10-09 10:55:47.045585 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-10-09 10:55:47.045596 | orchestrator | Thursday 09 October 2025 10:55:39 +0000 (0:00:00.518) 0:00:19.638 ****** 2025-10-09 10:55:47.045607 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.045622 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:47.045633 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:47.045644 | orchestrator | 2025-10-09 10:55:47.045654 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-10-09 10:55:47.045665 | orchestrator | Thursday 09 October 2025 10:55:40 +0000 (0:00:00.526) 0:00:20.165 ****** 2025-10-09 10:55:47.045676 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.045686 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:47.045697 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:47.045708 | orchestrator | 2025-10-09 10:55:47.045718 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-10-09 10:55:47.045729 | orchestrator | Thursday 09 October 2025 10:55:40 +0000 (0:00:00.327) 0:00:20.492 ****** 2025-10-09 10:55:47.045740 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:47.045751 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:55:47.045761 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:55:47.045772 | orchestrator | 2025-10-09 10:55:47.045783 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-10-09 10:55:47.045793 | orchestrator | Thursday 09 October 2025 10:55:40 +0000 (0:00:00.290) 0:00:20.783 ****** 2025-10-09 10:55:47.045804 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:55:47.045815 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:55:47.045826 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:55:47.045836 | orchestrator | 2025-10-09 10:55:47.045847 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-10-09 10:55:47.045858 | orchestrator | Thursday 09 October 2025 10:55:41 +0000 (0:00:00.554) 0:00:21.338 ****** 2025-10-09 10:55:47.045868 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:47.045879 | orchestrator | 2025-10-09 10:55:47.045890 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-10-09 10:55:47.045901 | orchestrator | Thursday 09 October 2025 10:55:41 +0000 (0:00:00.269) 0:00:21.607 ****** 2025-10-09 10:55:47.045912 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:55:47.045922 | orchestrator | 2025-10-09 10:55:47.045951 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-09 10:55:47.045963 | orchestrator | Thursday 09 
October 2025 10:55:41 +0000 (0:00:00.293) 0:00:21.901 ****** 2025-10-09 10:55:47.045974 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:47.045985 | orchestrator | 2025-10-09 10:55:47.045995 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-09 10:55:47.046006 | orchestrator | Thursday 09 October 2025 10:55:43 +0000 (0:00:01.787) 0:00:23.688 ****** 2025-10-09 10:55:47.046100 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:47.046112 | orchestrator | 2025-10-09 10:55:47.046123 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-09 10:55:47.046134 | orchestrator | Thursday 09 October 2025 10:55:44 +0000 (0:00:00.281) 0:00:23.969 ****** 2025-10-09 10:55:47.046178 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:47.046190 | orchestrator | 2025-10-09 10:55:47.046201 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:47.046212 | orchestrator | Thursday 09 October 2025 10:55:44 +0000 (0:00:00.263) 0:00:24.233 ****** 2025-10-09 10:55:47.046233 | orchestrator | 2025-10-09 10:55:47.046244 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:47.046255 | orchestrator | Thursday 09 October 2025 10:55:44 +0000 (0:00:00.069) 0:00:24.302 ****** 2025-10-09 10:55:47.046266 | orchestrator | 2025-10-09 10:55:47.046276 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:55:47.046287 | orchestrator | Thursday 09 October 2025 10:55:44 +0000 (0:00:00.070) 0:00:24.373 ****** 2025-10-09 10:55:47.046298 | orchestrator | 2025-10-09 10:55:47.046309 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-10-09 10:55:47.046319 | orchestrator | Thursday 09 October 2025 10:55:44 +0000 (0:00:00.087) 0:00:24.461 ****** 2025-10-09 10:55:47.046330 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:55:47.046341 | orchestrator | 2025-10-09 10:55:47.046352 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-09 10:55:47.046363 | orchestrator | Thursday 09 October 2025 10:55:46 +0000 (0:00:01.607) 0:00:26.069 ****** 2025-10-09 10:55:47.046373 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-10-09 10:55:47.046384 | orchestrator |  "msg": [ 2025-10-09 10:55:47.046396 | orchestrator |  "Validator run completed.", 2025-10-09 10:55:47.046407 | orchestrator |  "You can find the report file here:", 2025-10-09 10:55:47.046418 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-10-09T10:55:21+00:00-report.json", 2025-10-09 10:55:47.046430 | orchestrator |  "on the following host:", 2025-10-09 10:55:47.046441 | orchestrator |  "testbed-manager" 2025-10-09 10:55:47.046452 | orchestrator |  ] 2025-10-09 10:55:47.046463 | orchestrator | } 2025-10-09 10:55:47.046474 | orchestrator | 2025-10-09 10:55:47.046485 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:55:47.046497 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-10-09 10:55:47.046509 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  
rescued=0 ignored=0 2025-10-09 10:55:47.046520 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-10-09 10:55:47.046531 | orchestrator | 2025-10-09 10:55:47.046542 | orchestrator | 2025-10-09 10:55:47.046553 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:55:47.046564 | orchestrator | Thursday 09 October 2025 10:55:47 +0000 (0:00:00.872) 0:00:26.942 ****** 2025-10-09 10:55:47.046575 | orchestrator | =============================================================================== 2025-10-09 10:55:47.046591 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.40s 2025-10-09 10:55:47.046602 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2025-10-09 10:55:47.046613 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.74s 2025-10-09 10:55:47.046624 | orchestrator | Write report file ------------------------------------------------------- 1.61s 2025-10-09 10:55:47.046635 | orchestrator | Create report output directory ------------------------------------------ 0.97s 2025-10-09 10:55:47.046646 | orchestrator | Print report file information ------------------------------------------- 0.87s 2025-10-09 10:55:47.046657 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.71s 2025-10-09 10:55:47.046667 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s 2025-10-09 10:55:47.046694 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.58s 2025-10-09 10:55:47.046706 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.56s 2025-10-09 10:55:47.046728 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.56s 2025-10-09 10:55:47.046739 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.55s 2025-10-09 10:55:47.046756 | orchestrator | Set test result to passed if count matches ------------------------------ 0.55s 2025-10-09 10:55:47.046767 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.53s 2025-10-09 10:55:47.046778 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.53s 2025-10-09 10:55:47.046789 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2025-10-09 10:55:47.046809 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-10-09 10:55:47.349430 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s 2025-10-09 10:55:47.349516 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.40s 2025-10-09 10:55:47.349529 | orchestrator | Get OSDs that are not up or in ------------------------------------------ 0.39s 2025-10-09 10:55:47.697287 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-10-09 10:55:47.703570 | orchestrator | + set -e 2025-10-09 10:55:47.703603 | orchestrator | + source /opt/manager-vars.sh 2025-10-09 10:55:47.703615 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-10-09 10:55:47.703626 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-09 10:55:47.703637 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-09 
10:55:47.703648 | orchestrator | ++ CEPH_VERSION=reef 2025-10-09 10:55:47.703659 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-09 10:55:47.703671 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-10-09 10:55:47.703682 | orchestrator | ++ export MANAGER_VERSION=9.4.0 2025-10-09 10:55:47.703693 | orchestrator | ++ MANAGER_VERSION=9.4.0 2025-10-09 10:55:47.703704 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-09 10:55:47.703715 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-09 10:55:47.703726 | orchestrator | ++ export ARA=false 2025-10-09 10:55:47.703736 | orchestrator | ++ ARA=false 2025-10-09 10:55:47.703747 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-09 10:55:47.703758 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-09 10:55:47.703769 | orchestrator | ++ export TEMPEST=false 2025-10-09 10:55:47.703780 | orchestrator | ++ TEMPEST=false 2025-10-09 10:55:47.703791 | orchestrator | ++ export IS_ZUUL=true 2025-10-09 10:55:47.703801 | orchestrator | ++ IS_ZUUL=true 2025-10-09 10:55:47.703812 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 10:55:47.703823 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2025-10-09 10:55:47.703834 | orchestrator | ++ export EXTERNAL_API=false 2025-10-09 10:55:47.703845 | orchestrator | ++ EXTERNAL_API=false 2025-10-09 10:55:47.703856 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-09 10:55:47.703866 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-09 10:55:47.703877 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-09 10:55:47.703888 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-09 10:55:47.703899 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-09 10:55:47.703910 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-09 10:55:47.703920 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-10-09 10:55:47.703931 | orchestrator | + source /etc/os-release 2025-10-09 10:55:47.703942 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-10-09 10:55:47.703953 | orchestrator | ++ NAME=Ubuntu 2025-10-09 10:55:47.703964 | orchestrator | ++ VERSION_ID=24.04 2025-10-09 10:55:47.703975 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-10-09 10:55:47.703986 | orchestrator | ++ VERSION_CODENAME=noble 2025-10-09 10:55:47.703997 | orchestrator | ++ ID=ubuntu 2025-10-09 10:55:47.704007 | orchestrator | ++ ID_LIKE=debian 2025-10-09 10:55:47.704048 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-10-09 10:55:47.704059 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-10-09 10:55:47.704070 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-10-09 10:55:47.704081 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-10-09 10:55:47.704092 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-10-09 10:55:47.704103 | orchestrator | ++ LOGO=ubuntu-logo 2025-10-09 10:55:47.704114 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-10-09 10:55:47.704125 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-10-09 10:55:47.704137 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-10-09 10:55:47.740266 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-10-09 10:56:13.542418 | orchestrator | 2025-10-09 10:56:13.542530 | orchestrator | # Status of 
Elasticsearch 2025-10-09 10:56:13.542546 | orchestrator | 2025-10-09 10:56:13.542558 | orchestrator | + pushd /opt/configuration/contrib 2025-10-09 10:56:13.542571 | orchestrator | + echo 2025-10-09 10:56:13.542583 | orchestrator | + echo '# Status of Elasticsearch' 2025-10-09 10:56:13.542594 | orchestrator | + echo 2025-10-09 10:56:13.542605 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-10-09 10:56:13.765880 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-10-09 10:56:13.766395 | orchestrator | 2025-10-09 10:56:13.766920 | orchestrator | # Status of MariaDB 2025-10-09 10:56:13.766945 | orchestrator | 2025-10-09 10:56:13.766957 | orchestrator | + echo 2025-10-09 10:56:13.766969 | orchestrator | + echo '# Status of MariaDB' 2025-10-09 10:56:13.766980 | orchestrator | + echo 2025-10-09 10:56:13.766991 | orchestrator | + MARIADB_USER=root_shard_0 2025-10-09 10:56:13.767003 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-10-09 10:56:13.827850 | orchestrator | Reading package lists... 2025-10-09 10:56:14.198308 | orchestrator | Building dependency tree... 2025-10-09 10:56:14.199120 | orchestrator | Reading state information... 2025-10-09 10:56:14.668638 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-10-09 10:56:14.668727 | orchestrator | bc set to manually installed. 2025-10-09 10:56:14.668742 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
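[note] The check_galera_cluster invocation above derives cluster health from the wsrep_cluster_size Galera status variable (its OK line follows below). Since the mysql-client package is installed earlier in this script, the same value can also be read directly; a minimal sketch, assuming the api-int VIP accepts direct MySQL connections with the root_shard_0 credentials used above:
    $ mysql -u root_shard_0 -ppassword -h api-int.testbed.osism.xyz \
        -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
    # a healthy three-node control plane should report Value = 3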
2025-10-09 10:56:15.367377 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-10-09 10:56:15.368069 | orchestrator | 2025-10-09 10:56:15.368102 | orchestrator | # Status of Prometheus 2025-10-09 10:56:15.368116 | orchestrator | 2025-10-09 10:56:15.368129 | orchestrator | + echo 2025-10-09 10:56:15.368143 | orchestrator | + echo '# Status of Prometheus' 2025-10-09 10:56:15.368156 | orchestrator | + echo 2025-10-09 10:56:15.368169 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-10-09 10:56:15.431294 | orchestrator | Unauthorized 2025-10-09 10:56:15.434273 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-10-09 10:56:15.486632 | orchestrator | Unauthorized 2025-10-09 10:56:15.490319 | orchestrator | 2025-10-09 10:56:15.490359 | orchestrator | # Status of RabbitMQ 2025-10-09 10:56:15.490372 | orchestrator | 2025-10-09 10:56:15.490384 | orchestrator | + echo 2025-10-09 10:56:15.490395 | orchestrator | + echo '# Status of RabbitMQ' 2025-10-09 10:56:15.490406 | orchestrator | + echo 2025-10-09 10:56:15.490418 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-10-09 10:56:16.019866 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-10-09 10:56:16.029667 | orchestrator | 2025-10-09 10:56:16.029730 | orchestrator | # Status of Redis 2025-10-09 10:56:16.029744 | orchestrator | 2025-10-09 10:56:16.029756 | orchestrator | + echo 2025-10-09 10:56:16.029767 | orchestrator | + echo '# Status of Redis' 2025-10-09 10:56:16.029779 | orchestrator | + echo 2025-10-09 10:56:16.029792 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-10-09 10:56:16.036058 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001720s;;;0.000000;10.000000 2025-10-09 10:56:16.036601 | orchestrator | 2025-10-09 10:56:16.036699 | orchestrator | # Create backup of MariaDB database 2025-10-09 10:56:16.036707 | orchestrator | 2025-10-09 10:56:16.036712 | orchestrator | + popd 2025-10-09 10:56:16.036717 | orchestrator | + echo 2025-10-09 10:56:16.036721 | orchestrator | + echo '# Create backup of MariaDB database' 2025-10-09 10:56:16.036726 | orchestrator | + echo 2025-10-09 10:56:16.036731 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-10-09 10:56:18.136450 | orchestrator | 2025-10-09 10:56:18 | INFO  | Task 095db302-8cb5-4b5d-ae71-2fcd0ace4b6e (mariadb_backup) was prepared for execution. 2025-10-09 10:56:18.136573 | orchestrator | 2025-10-09 10:56:18 | INFO  | It takes a moment until task 095db302-8cb5-4b5d-ae71-2fcd0ace4b6e (mariadb_backup) has been started and output is visible here. 
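Editor's note: the two Prometheus probes above return "Unauthorized" because, unlike the other checks, the curl calls are sent without credentials and the endpoint on port 9091 evidently sits behind authentication. A hedged sketch of the same probes with HTTP basic auth; PROM_USER and PROM_PASSWORD are placeholders, not values taken from this deployment:

#!/usr/bin/env bash
# Hedged sketch: probe the Prometheus health/readiness endpoints with basic auth.
# PROM_USER / PROM_PASSWORD are hypothetical placeholders; the real credentials
# live in the deployment's secrets and are not shown in this log.
set -e
HOST="api-int.testbed.osism.xyz"
PORT=9091
PROM_USER="admin"            # placeholder
PROM_PASSWORD="password"     # placeholder
for path in healthy ready; do
    printf '%s: ' "${path}"
    curl -s -u "${PROM_USER}:${PROM_PASSWORD}" "https://${HOST}:${PORT}/-/${path}"
    echo
done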
2025-10-09 10:58:46.348649 | orchestrator | 2025-10-09 10:58:46.348757 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:58:46.348773 | orchestrator | 2025-10-09 10:58:46.348785 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:58:46.348797 | orchestrator | Thursday 09 October 2025 10:56:22 +0000 (0:00:00.200) 0:00:00.200 ****** 2025-10-09 10:58:46.348808 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:58:46.348820 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:58:46.348831 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:58:46.348842 | orchestrator | 2025-10-09 10:58:46.348853 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:58:46.348864 | orchestrator | Thursday 09 October 2025 10:56:22 +0000 (0:00:00.346) 0:00:00.547 ****** 2025-10-09 10:58:46.348875 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-10-09 10:58:46.348887 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-10-09 10:58:46.348898 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-10-09 10:58:46.348909 | orchestrator | 2025-10-09 10:58:46.348920 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-10-09 10:58:46.348931 | orchestrator | 2025-10-09 10:58:46.348942 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-10-09 10:58:46.348953 | orchestrator | Thursday 09 October 2025 10:56:23 +0000 (0:00:00.633) 0:00:01.180 ****** 2025-10-09 10:58:46.348964 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:58:46.348975 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-10-09 10:58:46.348986 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-10-09 10:58:46.348997 | orchestrator | 2025-10-09 10:58:46.349008 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-09 10:58:46.349020 | orchestrator | Thursday 09 October 2025 10:56:23 +0000 (0:00:00.394) 0:00:01.575 ****** 2025-10-09 10:58:46.349032 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:58:46.349043 | orchestrator | 2025-10-09 10:58:46.349054 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-10-09 10:58:46.349065 | orchestrator | Thursday 09 October 2025 10:56:24 +0000 (0:00:00.576) 0:00:02.152 ****** 2025-10-09 10:58:46.349109 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:58:46.349120 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:58:46.349132 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:58:46.349142 | orchestrator | 2025-10-09 10:58:46.349154 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-10-09 10:58:46.349165 | orchestrator | Thursday 09 October 2025 10:56:27 +0000 (0:00:03.282) 0:00:05.435 ****** 2025-10-09 10:58:46.349176 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-10-09 10:58:46.349187 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-10-09 10:58:46.349200 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-10-09 10:58:46.349212 | orchestrator | 
mariadb_bootstrap_restart 2025-10-09 10:58:46.349225 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:58:46.349237 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:58:46.349250 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:58:46.349262 | orchestrator | 2025-10-09 10:58:46.349274 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-10-09 10:58:46.349286 | orchestrator | skipping: no hosts matched 2025-10-09 10:58:46.349299 | orchestrator | 2025-10-09 10:58:46.349311 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-10-09 10:58:46.349324 | orchestrator | skipping: no hosts matched 2025-10-09 10:58:46.349336 | orchestrator | 2025-10-09 10:58:46.349349 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-10-09 10:58:46.349361 | orchestrator | skipping: no hosts matched 2025-10-09 10:58:46.349373 | orchestrator | 2025-10-09 10:58:46.349408 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-10-09 10:58:46.349419 | orchestrator | 2025-10-09 10:58:46.349430 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-10-09 10:58:46.349441 | orchestrator | Thursday 09 October 2025 10:58:45 +0000 (0:02:17.762) 0:02:23.197 ****** 2025-10-09 10:58:46.349452 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:58:46.349463 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:58:46.349474 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:58:46.349485 | orchestrator | 2025-10-09 10:58:46.349496 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-10-09 10:58:46.349507 | orchestrator | Thursday 09 October 2025 10:58:45 +0000 (0:00:00.321) 0:02:23.519 ****** 2025-10-09 10:58:46.349517 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:58:46.349528 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:58:46.349539 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:58:46.349550 | orchestrator | 2025-10-09 10:58:46.349561 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:58:46.349574 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:58:46.349585 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:58:46.349596 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:58:46.349607 | orchestrator | 2025-10-09 10:58:46.349618 | orchestrator | 2025-10-09 10:58:46.349629 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:58:46.349640 | orchestrator | Thursday 09 October 2025 10:58:45 +0000 (0:00:00.413) 0:02:23.932 ****** 2025-10-09 10:58:46.349651 | orchestrator | =============================================================================== 2025-10-09 10:58:46.349662 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 137.76s 2025-10-09 10:58:46.349689 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.28s 2025-10-09 10:58:46.349701 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-10-09 10:58:46.349712 | 
orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s 2025-10-09 10:58:46.349723 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2025-10-09 10:58:46.349734 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-10-09 10:58:46.349745 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-10-09 10:58:46.349756 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2025-10-09 10:58:46.682117 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-10-09 10:58:46.689873 | orchestrator | + set -e 2025-10-09 10:58:46.689901 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 10:58:46.689914 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 10:58:46.689925 | orchestrator | ++ INTERACTIVE=false 2025-10-09 10:58:46.689936 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 10:58:46.689948 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 10:58:46.689959 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-10-09 10:58:46.690755 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-10-09 10:58:46.695307 | orchestrator | 2025-10-09 10:58:46.695333 | orchestrator | # OpenStack endpoints 2025-10-09 10:58:46.695344 | orchestrator | 2025-10-09 10:58:46.695355 | orchestrator | ++ export MANAGER_VERSION=9.4.0 2025-10-09 10:58:46.695366 | orchestrator | ++ MANAGER_VERSION=9.4.0 2025-10-09 10:58:46.695377 | orchestrator | + export OS_CLOUD=admin 2025-10-09 10:58:46.695388 | orchestrator | + OS_CLOUD=admin 2025-10-09 10:58:46.695416 | orchestrator | + echo 2025-10-09 10:58:46.695427 | orchestrator | + echo '# OpenStack endpoints' 2025-10-09 10:58:46.695439 | orchestrator | + echo 2025-10-09 10:58:46.695450 | orchestrator | + openstack endpoint list 2025-10-09 10:58:50.112993 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-10-09 10:58:50.113134 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-10-09 10:58:50.113150 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-10-09 10:58:50.113162 | orchestrator | | 153f3c806a9d426ab3259a31539ddc6a | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-10-09 10:58:50.113173 | orchestrator | | 15e98d26c47e4b0f8d2ffea2d7f3e690 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-10-09 10:58:50.113185 | orchestrator | | 20123a49a34f417da62e41bf2e0fc4e5 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-10-09 10:58:50.113196 | orchestrator | | 37e223def7104138b4be777c354f2688 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-10-09 10:58:50.113208 | orchestrator | | 58752a735ea44ab4a06215400e11948c | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-10-09 10:58:50.113236 | orchestrator | | 59a967a661c14693b46e22c5aacc6601 | 
RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-10-09 10:58:50.113248 | orchestrator | | 6645887e67b24c08b1bb64a04ade051e | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-10-09 10:58:50.113259 | orchestrator | | 66edcc83ac554444917e205fe770d8f3 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-10-09 10:58:50.113270 | orchestrator | | 6747ccc4b3f4435389645cdcde4ee534 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-10-09 10:58:50.113281 | orchestrator | | 699772536dde40e6bfc374f42395e10b | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-10-09 10:58:50.113292 | orchestrator | | 77dc58a1f7964e898ecf247026a91195 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-10-09 10:58:50.113303 | orchestrator | | 7a989f606a834604b12f6cbccbcb1627 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-10-09 10:58:50.113314 | orchestrator | | 8106d2ee8754457aac6737000657dcba | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-10-09 10:58:50.113325 | orchestrator | | 8dcc6837617346f3bb07260ef5596c25 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-10-09 10:58:50.113336 | orchestrator | | 9f7ea2373d7a4147b02976d2f58a60a8 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-10-09 10:58:50.113347 | orchestrator | | adf5fbd71e7340879198fd0b3c7aa5f6 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-10-09 10:58:50.113358 | orchestrator | | cd16459b03c442a5944aca0fdbe42615 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-10-09 10:58:50.113369 | orchestrator | | d65ad65560fb460d97704e780e1c4bce | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-10-09 10:58:50.113402 | orchestrator | | d76fd3db569646699401919d5328fddd | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-10-09 10:58:50.113413 | orchestrator | | d81084c6c0ce4d91b67b722413797203 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-10-09 10:58:50.113441 | orchestrator | | ddf3914dd099451bb6be1f495d1252ee | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-10-09 10:58:50.113453 | orchestrator | | efa26d8db2b04b71b6e8a2c1384f7785 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-10-09 10:58:50.113464 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-10-09 10:58:50.405391 | orchestrator | 2025-10-09 10:58:50.405483 | orchestrator | # Cinder 2025-10-09 10:58:50.405497 | orchestrator | 2025-10-09 10:58:50.405509 | orchestrator | + echo 2025-10-09 10:58:50.405520 | orchestrator | + echo '# Cinder' 2025-10-09 10:58:50.405531 | orchestrator | + echo 2025-10-09 10:58:50.405543 | orchestrator | + openstack volume service list 2025-10-09 10:58:53.176887 | 
orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-10-09 10:58:53.176999 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-10-09 10:58:53.177013 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-10-09 10:58:53.177025 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-10-09T10:58:51.000000 | 2025-10-09 10:58:53.177036 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-10-09T10:58:43.000000 | 2025-10-09 10:58:53.177047 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-10-09T10:58:44.000000 | 2025-10-09 10:58:53.177058 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-10-09T10:58:43.000000 | 2025-10-09 10:58:53.177069 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-10-09T10:58:43.000000 | 2025-10-09 10:58:53.177129 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-10-09T10:58:43.000000 | 2025-10-09 10:58:53.177142 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-10-09T10:58:44.000000 | 2025-10-09 10:58:53.177153 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-10-09T10:58:45.000000 | 2025-10-09 10:58:53.177164 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-10-09T10:58:45.000000 | 2025-10-09 10:58:53.177175 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-10-09 10:58:53.483326 | orchestrator | 2025-10-09 10:58:53.483409 | orchestrator | # Neutron 2025-10-09 10:58:53.483421 | orchestrator | 2025-10-09 10:58:53.483430 | orchestrator | + echo 2025-10-09 10:58:53.483439 | orchestrator | + echo '# Neutron' 2025-10-09 10:58:53.483448 | orchestrator | + echo 2025-10-09 10:58:53.483457 | orchestrator | + openstack network agent list 2025-10-09 10:58:56.922938 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-10-09 10:58:56.923038 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-10-09 10:58:56.923053 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-10-09 10:58:56.923121 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-10-09 10:58:56.923134 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-10-09 10:58:56.923146 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-10-09 10:58:56.923157 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-10-09 10:58:56.923168 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-10-09 10:58:56.923179 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-10-09 10:58:56.923189 | orchestrator | | 
e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-10-09 10:58:56.923200 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-10-09 10:58:56.923211 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-10-09 10:58:56.923222 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-10-09 10:58:57.224397 | orchestrator | + openstack network service provider list 2025-10-09 10:58:59.935659 | orchestrator | +---------------+------+---------+ 2025-10-09 10:58:59.935754 | orchestrator | | Service Type | Name | Default | 2025-10-09 10:58:59.935767 | orchestrator | +---------------+------+---------+ 2025-10-09 10:58:59.935777 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-10-09 10:58:59.935788 | orchestrator | +---------------+------+---------+ 2025-10-09 10:59:00.270076 | orchestrator | 2025-10-09 10:59:00.270180 | orchestrator | # Nova 2025-10-09 10:59:00.270193 | orchestrator | 2025-10-09 10:59:00.270203 | orchestrator | + echo 2025-10-09 10:59:00.270213 | orchestrator | + echo '# Nova' 2025-10-09 10:59:00.270223 | orchestrator | + echo 2025-10-09 10:59:00.270233 | orchestrator | + openstack compute service list 2025-10-09 10:59:03.242545 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-10-09 10:59:03.242641 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-10-09 10:59:03.242655 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-10-09 10:59:03.242666 | orchestrator | | f2a4be42-7dc1-4221-8132-68b590d47812 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-10-09T10:58:59.000000 | 2025-10-09 10:59:03.242677 | orchestrator | | aa8ccd02-5a9c-440b-9742-53d59b30e937 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-10-09T10:59:00.000000 | 2025-10-09 10:59:03.242688 | orchestrator | | f4cedf25-345c-4b61-8bd4-591a582bfa84 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-10-09T10:59:02.000000 | 2025-10-09 10:59:03.242699 | orchestrator | | 1f7338aa-6e55-47e8-a438-4258e043ae08 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-10-09T10:59:00.000000 | 2025-10-09 10:59:03.242710 | orchestrator | | 79d40a84-e7d2-4a09-a3fd-ca4f0f27412b | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-10-09T10:58:53.000000 | 2025-10-09 10:59:03.242720 | orchestrator | | 844a1c1c-f82d-4d69-99ac-9de732b97ec8 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-10-09T10:58:54.000000 | 2025-10-09 10:59:03.242751 | orchestrator | | 850d99b8-0ba8-453d-8f11-92a821f10cc2 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-10-09T10:58:57.000000 | 2025-10-09 10:59:03.242784 | orchestrator | | 7d815c86-9f6d-47a3-8a13-8ae40977d0ba | nova-compute | testbed-node-4 | nova | enabled | up | 2025-10-09T10:58:58.000000 | 2025-10-09 10:59:03.242796 | orchestrator | | 25b1e1b8-0fe1-4e1a-a267-5c4077057094 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-10-09T10:58:58.000000 | 2025-10-09 10:59:03.242807 | orchestrator | 
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-10-09 10:59:03.541153 | orchestrator | + openstack hypervisor list 2025-10-09 10:59:06.842279 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-10-09 10:59:06.842384 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-10-09 10:59:06.842399 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-10-09 10:59:06.842411 | orchestrator | | d92334f4-5e3c-4d5a-99de-aeaf69ffe75b | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-10-09 10:59:06.842422 | orchestrator | | aa6a48c2-6f88-4d85-b0fe-68a379993782 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-10-09 10:59:06.842434 | orchestrator | | f95ebab8-442c-4ec2-82cf-586ab7b56c2a | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-10-09 10:59:06.842445 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-10-09 10:59:07.134425 | orchestrator | 2025-10-09 10:59:07.134509 | orchestrator | # Run OpenStack test play 2025-10-09 10:59:07.134523 | orchestrator | 2025-10-09 10:59:07.134535 | orchestrator | + echo 2025-10-09 10:59:07.134547 | orchestrator | + echo '# Run OpenStack test play' 2025-10-09 10:59:07.134559 | orchestrator | + echo 2025-10-09 10:59:07.134571 | orchestrator | + osism apply --environment openstack test 2025-10-09 10:59:09.177592 | orchestrator | 2025-10-09 10:59:09 | INFO  | Trying to run play test in environment openstack 2025-10-09 10:59:19.296819 | orchestrator | 2025-10-09 10:59:19 | INFO  | Task 2ea097ff-f83f-497d-a2d7-eb12fbdcc518 (test) was prepared for execution. 2025-10-09 10:59:19.296930 | orchestrator | 2025-10-09 10:59:19 | INFO  | It takes a moment until task 2ea097ff-f83f-497d-a2d7-eb12fbdcc518 (test) has been started and output is visible here. 
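Editor's note: before the test play is started, 300-openstack.sh only prints the endpoint, Cinder, Neutron, Nova and hypervisor tables; nothing fails if a service is down. A minimal sketch (not part of the testbed scripts) of how such a gate could look, using only documented openstack CLI options and the OS_CLOUD=admin export made earlier in the script:

#!/usr/bin/env bash
# Hedged sketch: turn the eyeball check of the service tables above into a hard failure.
# Uses machine-readable CLI output (-f value -c <column>); column names match the
# tables printed above.
set -e
export OS_CLOUD=admin
cinder_states=$(openstack volume service list -f value -c State)
nova_states=$(openstack compute service list -f value -c State)
if echo "${cinder_states}" | grep -qv '^up$'; then
    echo "at least one cinder service is not up" >&2
    exit 1
fi
if echo "${nova_states}" | grep -qv '^up$'; then
    echo "at least one nova service is not up" >&2
    exit 1
fi
echo "all cinder and nova services report up"

A similar gate could cover the Neutron agent table, but the Alive column is rendered as a smiley in the human-readable output above, so the exact machine-readable value would need to be verified before relying on it.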
2025-10-09 11:06:26.398219 | orchestrator | 2025-10-09 11:06:26.398369 | orchestrator | PLAY [Create test project] ***************************************************** 2025-10-09 11:06:26.398389 | orchestrator | 2025-10-09 11:06:26.398401 | orchestrator | TASK [Create test domain] ****************************************************** 2025-10-09 11:06:26.398413 | orchestrator | Thursday 09 October 2025 10:59:23 +0000 (0:00:00.082) 0:00:00.082 ****** 2025-10-09 11:06:26.398424 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398436 | orchestrator | 2025-10-09 11:06:26.398447 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-10-09 11:06:26.398458 | orchestrator | Thursday 09 October 2025 10:59:27 +0000 (0:00:03.886) 0:00:03.969 ****** 2025-10-09 11:06:26.398469 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398480 | orchestrator | 2025-10-09 11:06:26.398491 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-10-09 11:06:26.398502 | orchestrator | Thursday 09 October 2025 10:59:31 +0000 (0:00:04.390) 0:00:08.359 ****** 2025-10-09 11:06:26.398513 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398570 | orchestrator | 2025-10-09 11:06:26.398583 | orchestrator | TASK [Create test project] ***************************************************** 2025-10-09 11:06:26.398594 | orchestrator | Thursday 09 October 2025 10:59:38 +0000 (0:00:06.731) 0:00:15.090 ****** 2025-10-09 11:06:26.398605 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398616 | orchestrator | 2025-10-09 11:06:26.398627 | orchestrator | TASK [Create test user] ******************************************************** 2025-10-09 11:06:26.398638 | orchestrator | Thursday 09 October 2025 10:59:42 +0000 (0:00:04.139) 0:00:19.230 ****** 2025-10-09 11:06:26.398649 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398660 | orchestrator | 2025-10-09 11:06:26.398671 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-10-09 11:06:26.398713 | orchestrator | Thursday 09 October 2025 10:59:46 +0000 (0:00:04.389) 0:00:23.620 ****** 2025-10-09 11:06:26.398733 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-10-09 11:06:26.398753 | orchestrator | changed: [localhost] => (item=member) 2025-10-09 11:06:26.398774 | orchestrator | changed: [localhost] => (item=creator) 2025-10-09 11:06:26.398794 | orchestrator | 2025-10-09 11:06:26.398813 | orchestrator | TASK [Create test server group] ************************************************ 2025-10-09 11:06:26.398825 | orchestrator | Thursday 09 October 2025 10:59:59 +0000 (0:00:12.576) 0:00:36.196 ****** 2025-10-09 11:06:26.398836 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398847 | orchestrator | 2025-10-09 11:06:26.398858 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-10-09 11:06:26.398869 | orchestrator | Thursday 09 October 2025 11:00:04 +0000 (0:00:04.578) 0:00:40.775 ****** 2025-10-09 11:06:26.398879 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398890 | orchestrator | 2025-10-09 11:06:26.398901 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-10-09 11:06:26.398912 | orchestrator | Thursday 09 October 2025 11:00:09 +0000 (0:00:05.274) 0:00:46.050 ****** 2025-10-09 11:06:26.398923 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398933 | 
orchestrator | 2025-10-09 11:06:26.398944 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-10-09 11:06:26.398955 | orchestrator | Thursday 09 October 2025 11:00:13 +0000 (0:00:04.506) 0:00:50.556 ****** 2025-10-09 11:06:26.398965 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.398976 | orchestrator | 2025-10-09 11:06:26.398987 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-10-09 11:06:26.398997 | orchestrator | Thursday 09 October 2025 11:00:18 +0000 (0:00:04.141) 0:00:54.698 ****** 2025-10-09 11:06:26.399022 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.399033 | orchestrator | 2025-10-09 11:06:26.399044 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-10-09 11:06:26.399055 | orchestrator | Thursday 09 October 2025 11:00:22 +0000 (0:00:04.234) 0:00:58.932 ****** 2025-10-09 11:06:26.399071 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.399090 | orchestrator | 2025-10-09 11:06:26.399108 | orchestrator | TASK [Create test network topology] ******************************************** 2025-10-09 11:06:26.399127 | orchestrator | Thursday 09 October 2025 11:00:26 +0000 (0:00:04.178) 0:01:03.111 ****** 2025-10-09 11:06:26.399146 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.399158 | orchestrator | 2025-10-09 11:06:26.399169 | orchestrator | TASK [Create test instances] *************************************************** 2025-10-09 11:06:26.399179 | orchestrator | Thursday 09 October 2025 11:00:42 +0000 (0:00:15.773) 0:01:18.884 ****** 2025-10-09 11:06:26.399190 | orchestrator | changed: [localhost] => (item=test) 2025-10-09 11:06:26.399201 | orchestrator | changed: [localhost] => (item=test-1) 2025-10-09 11:06:26.399212 | orchestrator | 2025-10-09 11:06:26.399223 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-09 11:06:26.399234 | orchestrator | 2025-10-09 11:06:26.399244 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-09 11:06:26.399255 | orchestrator | changed: [localhost] => (item=test-2) 2025-10-09 11:06:26.399266 | orchestrator | 2025-10-09 11:06:26.399276 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-09 11:06:26.399287 | orchestrator | changed: [localhost] => (item=test-3) 2025-10-09 11:06:26.399297 | orchestrator | 2025-10-09 11:06:26.399308 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-09 11:06:26.399319 | orchestrator | 2025-10-09 11:06:26.399329 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-10-09 11:06:26.399340 | orchestrator | changed: [localhost] => (item=test-4) 2025-10-09 11:06:26.399351 | orchestrator | 2025-10-09 11:06:26.399361 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-10-09 11:06:26.399372 | orchestrator | Thursday 09 October 2025 11:04:59 +0000 (0:04:17.137) 0:05:36.022 ****** 2025-10-09 11:06:26.399393 | orchestrator | changed: [localhost] => (item=test) 2025-10-09 11:06:26.399404 | orchestrator | changed: [localhost] => (item=test-1) 2025-10-09 11:06:26.399414 | orchestrator | changed: [localhost] => (item=test-2) 2025-10-09 11:06:26.399425 | orchestrator | changed: [localhost] => (item=test-3) 2025-10-09 11:06:26.399436 | 
orchestrator | changed: [localhost] => (item=test-4) 2025-10-09 11:06:26.399446 | orchestrator | 2025-10-09 11:06:26.399462 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-10-09 11:06:26.399494 | orchestrator | Thursday 09 October 2025 11:05:24 +0000 (0:00:24.813) 0:06:00.836 ****** 2025-10-09 11:06:26.399506 | orchestrator | changed: [localhost] => (item=test) 2025-10-09 11:06:26.399517 | orchestrator | changed: [localhost] => (item=test-1) 2025-10-09 11:06:26.399555 | orchestrator | changed: [localhost] => (item=test-2) 2025-10-09 11:06:26.399567 | orchestrator | changed: [localhost] => (item=test-3) 2025-10-09 11:06:26.399578 | orchestrator | changed: [localhost] => (item=test-4) 2025-10-09 11:06:26.399589 | orchestrator | 2025-10-09 11:06:26.399599 | orchestrator | TASK [Create test volume] ****************************************************** 2025-10-09 11:06:26.399610 | orchestrator | Thursday 09 October 2025 11:05:59 +0000 (0:00:35.696) 0:06:36.532 ****** 2025-10-09 11:06:26.399626 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.399645 | orchestrator | 2025-10-09 11:06:26.399664 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-10-09 11:06:26.399683 | orchestrator | Thursday 09 October 2025 11:06:06 +0000 (0:00:06.608) 0:06:43.141 ****** 2025-10-09 11:06:26.399701 | orchestrator | changed: [localhost] 2025-10-09 11:06:26.399718 | orchestrator | 2025-10-09 11:06:26.399736 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-10-09 11:06:26.399755 | orchestrator | Thursday 09 October 2025 11:06:20 +0000 (0:00:13.657) 0:06:56.798 ****** 2025-10-09 11:06:26.399774 | orchestrator | ok: [localhost] 2025-10-09 11:06:26.399787 | orchestrator | 2025-10-09 11:06:26.399798 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-10-09 11:06:26.399808 | orchestrator | Thursday 09 October 2025 11:06:26 +0000 (0:00:05.931) 0:07:02.729 ****** 2025-10-09 11:06:26.399819 | orchestrator | ok: [localhost] => { 2025-10-09 11:06:26.399830 | orchestrator |  "msg": "192.168.112.121" 2025-10-09 11:06:26.399841 | orchestrator | } 2025-10-09 11:06:26.399852 | orchestrator | 2025-10-09 11:06:26.399862 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 11:06:26.399873 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 11:06:26.399886 | orchestrator | 2025-10-09 11:06:26.399897 | orchestrator | 2025-10-09 11:06:26.399907 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 11:06:26.399918 | orchestrator | Thursday 09 October 2025 11:06:26 +0000 (0:00:00.052) 0:07:02.782 ****** 2025-10-09 11:06:26.399928 | orchestrator | =============================================================================== 2025-10-09 11:06:26.399939 | orchestrator | Create test instances ------------------------------------------------- 257.14s 2025-10-09 11:06:26.399950 | orchestrator | Add tag to instances --------------------------------------------------- 35.70s 2025-10-09 11:06:26.399960 | orchestrator | Add metadata to instances ---------------------------------------------- 24.81s 2025-10-09 11:06:26.399971 | orchestrator | Create test network topology ------------------------------------------- 15.77s 2025-10-09 11:06:26.399982 | orchestrator | 
Attach test volume ----------------------------------------------------- 13.66s 2025-10-09 11:06:26.399992 | orchestrator | Add member roles to user test ------------------------------------------ 12.58s 2025-10-09 11:06:26.400004 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.73s 2025-10-09 11:06:26.400022 | orchestrator | Create test volume ------------------------------------------------------ 6.61s 2025-10-09 11:06:26.400040 | orchestrator | Create floating ip address ---------------------------------------------- 5.93s 2025-10-09 11:06:26.400071 | orchestrator | Create ssh security group ----------------------------------------------- 5.28s 2025-10-09 11:06:26.400083 | orchestrator | Create test server group ------------------------------------------------ 4.58s 2025-10-09 11:06:26.400094 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.51s 2025-10-09 11:06:26.400104 | orchestrator | Create test-admin user -------------------------------------------------- 4.39s 2025-10-09 11:06:26.400115 | orchestrator | Create test user -------------------------------------------------------- 4.39s 2025-10-09 11:06:26.400126 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.23s 2025-10-09 11:06:26.400145 | orchestrator | Create test keypair ----------------------------------------------------- 4.18s 2025-10-09 11:06:26.400157 | orchestrator | Create icmp security group ---------------------------------------------- 4.14s 2025-10-09 11:06:26.400168 | orchestrator | Create test project ----------------------------------------------------- 4.14s 2025-10-09 11:06:26.400179 | orchestrator | Create test domain ------------------------------------------------------ 3.89s 2025-10-09 11:06:26.400190 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-10-09 11:06:26.751740 | orchestrator | + server_list 2025-10-09 11:06:26.751844 | orchestrator | + openstack --os-cloud test server list 2025-10-09 11:06:30.751834 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-10-09 11:06:30.751908 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-10-09 11:06:30.751914 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-10-09 11:06:30.751920 | orchestrator | | 0310286a-f2cf-4b16-8e0d-a384cb6690ff | test-4 | ACTIVE | auto_allocated_network=10.42.0.13, 192.168.112.137 | N/A (booted from volume) | SCS-1L-1 | 2025-10-09 11:06:30.751924 | orchestrator | | 41f471e2-743d-45b9-90a8-654755413940 | test-3 | ACTIVE | auto_allocated_network=10.42.0.55, 192.168.112.154 | N/A (booted from volume) | SCS-1L-1 | 2025-10-09 11:06:30.751929 | orchestrator | | ffa2c8eb-cf30-4001-b20e-e350025afc5d | test-2 | ACTIVE | auto_allocated_network=10.42.0.5, 192.168.112.144 | N/A (booted from volume) | SCS-1L-1 | 2025-10-09 11:06:30.751933 | orchestrator | | 408bf34d-776c-4af0-acc7-73e22ecd5fc3 | test-1 | ACTIVE | auto_allocated_network=10.42.0.25, 192.168.112.114 | N/A (booted from volume) | SCS-1L-1 | 2025-10-09 11:06:30.751937 | orchestrator | | 705f6d64-08ae-4522-829f-c8eaa430ad59 | test | ACTIVE | auto_allocated_network=10.42.0.4, 192.168.112.121 | N/A (booted from volume) | SCS-1L-1 | 2025-10-09 
11:06:30.751942 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-10-09 11:06:31.066271 | orchestrator | + openstack --os-cloud test server show test 2025-10-09 11:06:34.532668 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:34.532795 | orchestrator | | Field | Value | 2025-10-09 11:06:34.532813 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:34.532843 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-09 11:06:34.532861 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-09 11:06:34.532873 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-09 11:06:34.532885 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-10-09 11:06:34.532896 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-09 11:06:34.532907 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-09 11:06:34.532937 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-09 11:06:34.532949 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-09 11:06:34.532961 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-09 11:06:34.532980 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-09 11:06:34.532991 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-09 11:06:34.533007 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-09 11:06:34.533019 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-10-09 11:06:34.533030 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-09 11:06:34.533041 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-09 11:06:34.533053 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:01:26.000000 | 2025-10-09 11:06:34.533071 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-09 11:06:34.533083 | orchestrator | | accessIPv4 | | 2025-10-09 11:06:34.533101 | orchestrator | | accessIPv6 | | 2025-10-09 11:06:34.533113 | orchestrator | | addresses | auto_allocated_network=10.42.0.4, 192.168.112.121 | 2025-10-09 11:06:34.533125 | orchestrator | | config_drive | | 2025-10-09 11:06:34.533143 | orchestrator | | created | 2025-10-09T11:00:50Z | 2025-10-09 11:06:34.533156 | orchestrator | | description | None | 2025-10-09 11:06:34.533169 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-09 11:06:34.533183 
| orchestrator | | hostId | 86d3c7663fbe635372c013ff4e1683e217bb97e22779557309934e8a | 2025-10-09 11:06:34.533195 | orchestrator | | host_status | None | 2025-10-09 11:06:34.533215 | orchestrator | | id | 705f6d64-08ae-4522-829f-c8eaa430ad59 | 2025-10-09 11:06:34.533229 | orchestrator | | image | N/A (booted from volume) | 2025-10-09 11:06:34.533249 | orchestrator | | key_name | test | 2025-10-09 11:06:34.533263 | orchestrator | | locked | False | 2025-10-09 11:06:34.533276 | orchestrator | | locked_reason | None | 2025-10-09 11:06:34.533294 | orchestrator | | name | test | 2025-10-09 11:06:34.533308 | orchestrator | | pinned_availability_zone | None | 2025-10-09 11:06:34.533321 | orchestrator | | progress | 0 | 2025-10-09 11:06:34.533334 | orchestrator | | project_id | c573bb29a2224c9f92ffda2c823ffece | 2025-10-09 11:06:34.533347 | orchestrator | | properties | hostname='test' | 2025-10-09 11:06:34.533368 | orchestrator | | security_groups | name='icmp' | 2025-10-09 11:06:34.533388 | orchestrator | | | name='ssh' | 2025-10-09 11:06:34.533401 | orchestrator | | server_groups | None | 2025-10-09 11:06:34.533414 | orchestrator | | status | ACTIVE | 2025-10-09 11:06:34.533428 | orchestrator | | tags | test | 2025-10-09 11:06:34.533441 | orchestrator | | trusted_image_certificates | None | 2025-10-09 11:06:34.533464 | orchestrator | | updated | 2025-10-09T11:05:04Z | 2025-10-09 11:06:34.533478 | orchestrator | | user_id | 7d57d7dce1f14cf591235ea146c39c26 | 2025-10-09 11:06:34.533491 | orchestrator | | volumes_attached | delete_on_termination='True', id='517faf29-e1c9-494e-b190-94cf0f21104e' | 2025-10-09 11:06:34.533502 | orchestrator | | | delete_on_termination='False', id='99d54ee0-1254-4a09-8dc8-391f48db0b46' | 2025-10-09 11:06:34.538063 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:34.836362 | orchestrator | + openstack --os-cloud test server show test-1 2025-10-09 11:06:38.146493 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:38.146623 | orchestrator | | Field | Value | 2025-10-09 11:06:38.146639 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:38.146659 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-09 11:06:38.146671 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-09 11:06:38.146682 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-09 11:06:38.146694 | orchestrator | | 
OS-EXT-SRV-ATTR:hostname | test-1 | 2025-10-09 11:06:38.146705 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-09 11:06:38.146738 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-09 11:06:38.146767 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-09 11:06:38.146779 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-09 11:06:38.146791 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-09 11:06:38.146802 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-09 11:06:38.146817 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-09 11:06:38.146829 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-09 11:06:38.146840 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-10-09 11:06:38.146852 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-09 11:06:38.146897 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-09 11:06:38.146910 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:02:21.000000 | 2025-10-09 11:06:38.146929 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-09 11:06:38.146941 | orchestrator | | accessIPv4 | | 2025-10-09 11:06:38.146952 | orchestrator | | accessIPv6 | | 2025-10-09 11:06:38.146963 | orchestrator | | addresses | auto_allocated_network=10.42.0.25, 192.168.112.114 | 2025-10-09 11:06:38.146979 | orchestrator | | config_drive | | 2025-10-09 11:06:38.146991 | orchestrator | | created | 2025-10-09T11:01:46Z | 2025-10-09 11:06:38.147002 | orchestrator | | description | None | 2025-10-09 11:06:38.147014 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-09 11:06:38.147039 | orchestrator | | hostId | fa92b3b4ad9cc91902c64c0f53d35c1eda0856c0eaa01a3d7a735873 | 2025-10-09 11:06:38.147053 | orchestrator | | host_status | None | 2025-10-09 11:06:38.147073 | orchestrator | | id | 408bf34d-776c-4af0-acc7-73e22ecd5fc3 | 2025-10-09 11:06:38.147087 | orchestrator | | image | N/A (booted from volume) | 2025-10-09 11:06:38.147100 | orchestrator | | key_name | test | 2025-10-09 11:06:38.147114 | orchestrator | | locked | False | 2025-10-09 11:06:38.147132 | orchestrator | | locked_reason | None | 2025-10-09 11:06:38.147145 | orchestrator | | name | test-1 | 2025-10-09 11:06:38.147158 | orchestrator | | pinned_availability_zone | None | 2025-10-09 11:06:38.147178 | orchestrator | | progress | 0 | 2025-10-09 11:06:38.147191 | orchestrator | | project_id | c573bb29a2224c9f92ffda2c823ffece | 2025-10-09 11:06:38.147205 | orchestrator | | properties | hostname='test-1' | 2025-10-09 11:06:38.147226 | orchestrator | | security_groups | name='icmp' | 2025-10-09 11:06:38.147240 | orchestrator | | | name='ssh' | 2025-10-09 11:06:38.147254 | orchestrator | | server_groups | None | 2025-10-09 11:06:38.147266 | orchestrator | | status | ACTIVE | 2025-10-09 11:06:38.147280 | orchestrator | | tags | test | 2025-10-09 11:06:38.147293 | orchestrator | | trusted_image_certificates | None | 2025-10-09 11:06:38.147314 | orchestrator | | updated | 2025-10-09T11:05:09Z | 2025-10-09 11:06:38.147327 | orchestrator | | user_id | 7d57d7dce1f14cf591235ea146c39c26 | 2025-10-09 11:06:38.147340 | orchestrator | | 
volumes_attached | delete_on_termination='True', id='ebee09aa-6d12-411e-93a9-331a6e30536e' | 2025-10-09 11:06:38.150442 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:38.453906 | orchestrator | + openstack --os-cloud test server show test-2 2025-10-09 11:06:41.719206 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:41.719321 | orchestrator | | Field | Value | 2025-10-09 11:06:41.719340 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:41.719373 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-09 11:06:41.719390 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-09 11:06:41.719424 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-09 11:06:41.719437 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-10-09 11:06:41.719448 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-09 11:06:41.719459 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-09 11:06:41.719489 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-09 11:06:41.719501 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-09 11:06:41.719513 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-09 11:06:41.719524 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-09 11:06:41.719602 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-09 11:06:41.719626 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-09 11:06:41.719637 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-10-09 11:06:41.719649 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-09 11:06:41.719660 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-09 11:06:41.719672 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:03:16.000000 | 2025-10-09 11:06:41.719691 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-09 11:06:41.719703 | orchestrator | | accessIPv4 | | 2025-10-09 11:06:41.719715 | orchestrator | | accessIPv6 | | 2025-10-09 11:06:41.719728 | orchestrator | | addresses | auto_allocated_network=10.42.0.5, 192.168.112.144 | 2025-10-09 11:06:41.719754 | orchestrator | | config_drive | | 2025-10-09 11:06:41.719775 | orchestrator | | created | 2025-10-09T11:02:41Z | 2025-10-09 11:06:41.719789 | orchestrator | | description | None | 2025-10-09 11:06:41.719802 | orchestrator | | flavor | description=, disk='0', 
ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-09 11:06:41.719815 | orchestrator | | hostId | cac02ed79d4f3de25594d68adb576314e31f9f3b465b9783d21080e8 | 2025-10-09 11:06:41.719828 | orchestrator | | host_status | None | 2025-10-09 11:06:41.719848 | orchestrator | | id | ffa2c8eb-cf30-4001-b20e-e350025afc5d | 2025-10-09 11:06:41.719861 | orchestrator | | image | N/A (booted from volume) | 2025-10-09 11:06:41.719874 | orchestrator | | key_name | test | 2025-10-09 11:06:41.719887 | orchestrator | | locked | False | 2025-10-09 11:06:41.719915 | orchestrator | | locked_reason | None | 2025-10-09 11:06:41.719929 | orchestrator | | name | test-2 | 2025-10-09 11:06:41.719942 | orchestrator | | pinned_availability_zone | None | 2025-10-09 11:06:41.719955 | orchestrator | | progress | 0 | 2025-10-09 11:06:41.719968 | orchestrator | | project_id | c573bb29a2224c9f92ffda2c823ffece | 2025-10-09 11:06:41.719981 | orchestrator | | properties | hostname='test-2' | 2025-10-09 11:06:41.720001 | orchestrator | | security_groups | name='icmp' | 2025-10-09 11:06:41.720016 | orchestrator | | | name='ssh' | 2025-10-09 11:06:41.720029 | orchestrator | | server_groups | None | 2025-10-09 11:06:41.720051 | orchestrator | | status | ACTIVE | 2025-10-09 11:06:41.720070 | orchestrator | | tags | test | 2025-10-09 11:06:41.720083 | orchestrator | | trusted_image_certificates | None | 2025-10-09 11:06:41.720095 | orchestrator | | updated | 2025-10-09T11:05:14Z | 2025-10-09 11:06:41.720106 | orchestrator | | user_id | 7d57d7dce1f14cf591235ea146c39c26 | 2025-10-09 11:06:41.720117 | orchestrator | | volumes_attached | delete_on_termination='True', id='3238809c-7e68-440b-886d-f29cb75ec2b3' | 2025-10-09 11:06:41.723944 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:41.998274 | orchestrator | + openstack --os-cloud test server show test-3 2025-10-09 11:06:45.082824 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:45.082931 | orchestrator | | Field | Value | 2025-10-09 11:06:45.082972 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:45.082984 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-09 11:06:45.083011 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-09 11:06:45.083023 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-09 11:06:45.083035 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-10-09 11:06:45.083046 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-09 11:06:45.083058 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-09 11:06:45.083087 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-09 11:06:45.083100 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-09 11:06:45.083120 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-09 11:06:45.083131 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-09 11:06:45.083143 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-09 11:06:45.083154 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-09 11:06:45.083166 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-10-09 11:06:45.083178 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-09 11:06:45.083189 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-09 11:06:45.083208 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:04:03.000000 | 2025-10-09 11:06:45.083227 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-09 11:06:45.083239 | orchestrator | | accessIPv4 | | 2025-10-09 11:06:45.083258 | orchestrator | | accessIPv6 | | 2025-10-09 11:06:45.083270 | orchestrator | | addresses | auto_allocated_network=10.42.0.55, 192.168.112.154 | 2025-10-09 11:06:45.083281 | orchestrator | | config_drive | | 2025-10-09 11:06:45.083298 | orchestrator | | created | 2025-10-09T11:03:37Z | 2025-10-09 11:06:45.083310 | orchestrator | | description | None | 2025-10-09 11:06:45.083321 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-09 11:06:45.083332 | orchestrator | | hostId | 86d3c7663fbe635372c013ff4e1683e217bb97e22779557309934e8a | 2025-10-09 11:06:45.083344 | orchestrator | | host_status | None | 2025-10-09 11:06:45.083362 | orchestrator | | id | 41f471e2-743d-45b9-90a8-654755413940 | 2025-10-09 11:06:45.083382 | orchestrator | | image | N/A (booted from volume) | 2025-10-09 11:06:45.083394 | orchestrator | | key_name | test | 2025-10-09 11:06:45.083405 | orchestrator | | locked | False | 2025-10-09 11:06:45.083417 | orchestrator | | locked_reason | None | 2025-10-09 11:06:45.083432 | orchestrator | | name | test-3 | 2025-10-09 11:06:45.083444 | orchestrator | | pinned_availability_zone | None | 2025-10-09 11:06:45.083455 | orchestrator | | progress | 0 | 2025-10-09 11:06:45.083467 | orchestrator | | project_id | c573bb29a2224c9f92ffda2c823ffece | 2025-10-09 11:06:45.083478 | orchestrator | | properties | hostname='test-3' | 2025-10-09 11:06:45.083506 | orchestrator | | security_groups | name='icmp' | 2025-10-09 11:06:45.083518 | orchestrator | | | name='ssh' | 2025-10-09 11:06:45.083530 | orchestrator | | server_groups | None | 2025-10-09 11:06:45.083565 | orchestrator | | status | ACTIVE | 2025-10-09 11:06:45.083577 | orchestrator | | tags | test | 2025-10-09 11:06:45.083594 | orchestrator | | 
trusted_image_certificates | None | 2025-10-09 11:06:45.083606 | orchestrator | | updated | 2025-10-09T11:05:19Z | 2025-10-09 11:06:45.083617 | orchestrator | | user_id | 7d57d7dce1f14cf591235ea146c39c26 | 2025-10-09 11:06:45.083628 | orchestrator | | volumes_attached | delete_on_termination='True', id='32783ece-933c-4620-852e-7193c27cc3ed' | 2025-10-09 11:06:45.089166 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:45.446189 | orchestrator | + openstack --os-cloud test server show test-4 2025-10-09 11:06:48.720347 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:48.720449 | orchestrator | | Field | Value | 2025-10-09 11:06:48.720466 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:48.720480 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-10-09 11:06:48.720508 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-10-09 11:06:48.720520 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-10-09 11:06:48.720532 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-10-09 11:06:48.720602 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-10-09 11:06:48.720615 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-10-09 11:06:48.720673 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-10-09 11:06:48.720687 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-10-09 11:06:48.720699 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-10-09 11:06:48.720711 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-10-09 11:06:48.720723 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-10-09 11:06:48.720741 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-10-09 11:06:48.720754 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-10-09 11:06:48.720766 | orchestrator | | OS-EXT-STS:task_state | None | 2025-10-09 11:06:48.720778 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-10-09 11:06:48.720806 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:04:46.000000 | 2025-10-09 11:06:48.720827 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-10-09 11:06:48.720839 | orchestrator | | accessIPv4 | | 2025-10-09 11:06:48.720850 | orchestrator | | accessIPv6 | | 2025-10-09 11:06:48.720862 | orchestrator | | addresses | auto_allocated_network=10.42.0.13, 192.168.112.137 | 2025-10-09 11:06:48.720874 | 
orchestrator | | config_drive | | 2025-10-09 11:06:48.720892 | orchestrator | | created | 2025-10-09T11:04:21Z | 2025-10-09 11:06:48.720904 | orchestrator | | description | None | 2025-10-09 11:06:48.720916 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-10-09 11:06:48.720937 | orchestrator | | hostId | cac02ed79d4f3de25594d68adb576314e31f9f3b465b9783d21080e8 | 2025-10-09 11:06:48.720949 | orchestrator | | host_status | None | 2025-10-09 11:06:48.720969 | orchestrator | | id | 0310286a-f2cf-4b16-8e0d-a384cb6690ff | 2025-10-09 11:06:48.720981 | orchestrator | | image | N/A (booted from volume) | 2025-10-09 11:06:48.720993 | orchestrator | | key_name | test | 2025-10-09 11:06:48.721005 | orchestrator | | locked | False | 2025-10-09 11:06:48.721017 | orchestrator | | locked_reason | None | 2025-10-09 11:06:48.721029 | orchestrator | | name | test-4 | 2025-10-09 11:06:48.721041 | orchestrator | | pinned_availability_zone | None | 2025-10-09 11:06:48.721063 | orchestrator | | progress | 0 | 2025-10-09 11:06:48.721075 | orchestrator | | project_id | c573bb29a2224c9f92ffda2c823ffece | 2025-10-09 11:06:48.721087 | orchestrator | | properties | hostname='test-4' | 2025-10-09 11:06:48.721107 | orchestrator | | security_groups | name='icmp' | 2025-10-09 11:06:48.721119 | orchestrator | | | name='ssh' | 2025-10-09 11:06:48.721583 | orchestrator | | server_groups | None | 2025-10-09 11:06:48.721601 | orchestrator | | status | ACTIVE | 2025-10-09 11:06:48.721613 | orchestrator | | tags | test | 2025-10-09 11:06:48.721625 | orchestrator | | trusted_image_certificates | None | 2025-10-09 11:06:48.721636 | orchestrator | | updated | 2025-10-09T11:05:23Z | 2025-10-09 11:06:48.721655 | orchestrator | | user_id | 7d57d7dce1f14cf591235ea146c39c26 | 2025-10-09 11:06:48.721667 | orchestrator | | volumes_attached | delete_on_termination='True', id='0a02d261-b462-4c58-ad3b-d1fea2eb3af9' | 2025-10-09 11:06:48.725269 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:06:49.034311 | orchestrator | + server_ping 2025-10-09 11:06:49.036142 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-10-09 11:06:49.036191 | orchestrator | ++ tr -d '\r' 2025-10-09 11:06:52.150625 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:06:52.150715 | orchestrator | + ping -c3 192.168.112.121 2025-10-09 11:06:52.167898 | orchestrator | PING 192.168.112.121 (192.168.112.121) 56(84) bytes of data. 
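
The `openstack --os-cloud test server show test-N` calls above dump the full field table for each instance (status, addresses, flavor, attached boot volume). For a scripted health check the same information reduces to one field per call. A minimal sketch, assuming the `test` clouds.yaml entry used throughout this job; the `test-` name filter matches the servers shown above, while the loop shape and failure handling are illustrative rather than taken from the job's own scripts:

    #!/usr/bin/env bash
    # Reduce the verbose `server show` tables above to a one-line status check.
    # Assumes the "test" entry in clouds.yaml used by this job; the test-* name
    # filter matches the servers shown in the log, the rest is a sketch.
    set -euo pipefail

    for name in $(openstack --os-cloud test server list -f value -c Name | grep '^test-'); do
        status=$(openstack --os-cloud test server show "$name" -f value -c status)
        echo "${name}: ${status}"
        [[ "$status" == "ACTIVE" ]] || { echo "${name} is not ACTIVE" >&2; exit 1; }
    done

The same `-f value -c <field>` formatter is what the job itself uses a few lines further down to extract the floating IP addresses for the ping check.
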
2025-10-09 11:06:52.167925 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=1 ttl=63 time=8.46 ms 2025-10-09 11:06:53.164190 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=2 ttl=63 time=2.75 ms 2025-10-09 11:06:54.165695 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=3 ttl=63 time=2.44 ms 2025-10-09 11:06:54.165804 | orchestrator | 2025-10-09 11:06:54.165827 | orchestrator | --- 192.168.112.121 ping statistics --- 2025-10-09 11:06:54.165846 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-10-09 11:06:54.165864 | orchestrator | rtt min/avg/max/mdev = 2.436/4.549/8.460/2.768 ms 2025-10-09 11:06:54.165883 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:06:54.165901 | orchestrator | + ping -c3 192.168.112.144 2025-10-09 11:06:54.178436 | orchestrator | PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data. 2025-10-09 11:06:54.178531 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=8.60 ms 2025-10-09 11:06:55.174388 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=2.60 ms 2025-10-09 11:06:56.175773 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=2.16 ms 2025-10-09 11:06:56.175863 | orchestrator | 2025-10-09 11:06:56.175879 | orchestrator | --- 192.168.112.144 ping statistics --- 2025-10-09 11:06:56.175892 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-10-09 11:06:56.175903 | orchestrator | rtt min/avg/max/mdev = 2.160/4.454/8.601/2.937 ms 2025-10-09 11:06:56.176252 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:06:56.176275 | orchestrator | + ping -c3 192.168.112.114 2025-10-09 11:06:56.188197 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data. 2025-10-09 11:06:56.188239 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=7.50 ms 2025-10-09 11:06:57.185456 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.64 ms 2025-10-09 11:06:58.186097 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.87 ms 2025-10-09 11:06:58.186193 | orchestrator | 2025-10-09 11:06:58.186208 | orchestrator | --- 192.168.112.114 ping statistics --- 2025-10-09 11:06:58.186221 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-10-09 11:06:58.186232 | orchestrator | rtt min/avg/max/mdev = 1.866/4.003/7.499/2.492 ms 2025-10-09 11:06:58.186845 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:06:58.186873 | orchestrator | + ping -c3 192.168.112.137 2025-10-09 11:06:58.197620 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data. 
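
The `+` xtrace lines above make the shape of the `server_ping` helper visible: list all ACTIVE floating IPs, strip carriage returns, and ping each address three times. A sketch reconstructed from those traced commands (the function name and both commands appear verbatim in the log; the comment about failure behaviour is an assumption about the surrounding script):

    # Ping every ACTIVE floating IP three times, as traced in the log above.
    server_ping() {
        local address
        for address in $(openstack --os-cloud test floating ip list --status ACTIVE \
                             -f value -c "Floating IP Address" | tr -d '\r'); do
            # A host that never answers makes ping exit non-zero, which would
            # fail the task if the surrounding script runs with `set -e`.
            ping -c3 "$address"
        done
    }

Because the floating IP list is queried at call time, the same helper works unchanged regardless of how many test servers the scenario created.
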
2025-10-09 11:06:58.197654 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=6.40 ms 2025-10-09 11:06:59.195771 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.37 ms 2025-10-09 11:07:00.196344 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.51 ms 2025-10-09 11:07:00.196626 | orchestrator | 2025-10-09 11:07:00.196646 | orchestrator | --- 192.168.112.137 ping statistics --- 2025-10-09 11:07:00.196659 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-10-09 11:07:00.196670 | orchestrator | rtt min/avg/max/mdev = 1.508/3.426/6.402/2.133 ms 2025-10-09 11:07:00.196694 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:07:00.196706 | orchestrator | + ping -c3 192.168.112.154 2025-10-09 11:07:00.208836 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 2025-10-09 11:07:00.208871 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=6.39 ms 2025-10-09 11:07:01.207464 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=3.12 ms 2025-10-09 11:07:02.208248 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=2.31 ms 2025-10-09 11:07:02.208339 | orchestrator | 2025-10-09 11:07:02.208354 | orchestrator | --- 192.168.112.154 ping statistics --- 2025-10-09 11:07:02.208367 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-10-09 11:07:02.208379 | orchestrator | rtt min/avg/max/mdev = 2.305/3.938/6.389/1.764 ms 2025-10-09 11:07:02.209113 | orchestrator | + [[ 9.4.0 == \l\a\t\e\s\t ]] 2025-10-09 11:07:02.443790 | orchestrator | ok: Runtime: 0:13:52.058650 2025-10-09 11:07:02.502930 | 2025-10-09 11:07:02.503089 | TASK [Run tempest] 2025-10-09 11:07:03.038728 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:03.055093 | 2025-10-09 11:07:03.055243 | TASK [Check prometheus alert status] 2025-10-09 11:07:03.590694 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:03.593743 | 2025-10-09 11:07:03.593948 | PLAY RECAP 2025-10-09 11:07:03.594110 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-10-09 11:07:03.594178 | 2025-10-09 11:07:03.808058 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-10-09 11:07:03.811928 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-10-09 11:07:04.554894 | 2025-10-09 11:07:04.555052 | PLAY [Post output play] 2025-10-09 11:07:04.570774 | 2025-10-09 11:07:04.570924 | LOOP [stage-output : Register sources] 2025-10-09 11:07:04.639555 | 2025-10-09 11:07:04.639841 | TASK [stage-output : Check sudo] 2025-10-09 11:07:05.570831 | orchestrator | sudo: a password is required 2025-10-09 11:07:05.677848 | orchestrator | ok: Runtime: 0:00:00.117065 2025-10-09 11:07:05.691487 | 2025-10-09 11:07:05.691638 | LOOP [stage-output : Set source and destination for files and folders] 2025-10-09 11:07:05.731346 | 2025-10-09 11:07:05.731614 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-10-09 11:07:05.799942 | orchestrator | ok 2025-10-09 11:07:05.809093 | 2025-10-09 11:07:05.809229 | LOOP [stage-output : Ensure target folders exist] 2025-10-09 11:07:06.230602 | orchestrator | ok: "docs" 2025-10-09 11:07:06.231031 | 2025-10-09 11:07:06.458270 | orchestrator | ok: "artifacts" 2025-10-09 
11:07:06.696250 | orchestrator | ok: "logs" 2025-10-09 11:07:06.714628 | 2025-10-09 11:07:06.714891 | LOOP [stage-output : Copy files and folders to staging folder] 2025-10-09 11:07:06.753474 | 2025-10-09 11:07:06.753810 | TASK [stage-output : Make all log files readable] 2025-10-09 11:07:07.021450 | orchestrator | ok 2025-10-09 11:07:07.029180 | 2025-10-09 11:07:07.029293 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-10-09 11:07:07.063255 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:07.079035 | 2025-10-09 11:07:07.079183 | TASK [stage-output : Discover log files for compression] 2025-10-09 11:07:07.103042 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:07.114562 | 2025-10-09 11:07:07.114709 | LOOP [stage-output : Archive everything from logs] 2025-10-09 11:07:07.159807 | 2025-10-09 11:07:07.159981 | PLAY [Post cleanup play] 2025-10-09 11:07:07.169921 | 2025-10-09 11:07:07.170034 | TASK [Set cloud fact (Zuul deployment)] 2025-10-09 11:07:07.236707 | orchestrator | ok 2025-10-09 11:07:07.247870 | 2025-10-09 11:07:07.247980 | TASK [Set cloud fact (local deployment)] 2025-10-09 11:07:07.281711 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:07.297006 | 2025-10-09 11:07:07.297148 | TASK [Clean the cloud environment] 2025-10-09 11:07:08.266379 | orchestrator | 2025-10-09 11:07:08 - clean up servers 2025-10-09 11:07:09.003383 | orchestrator | 2025-10-09 11:07:09 - testbed-manager 2025-10-09 11:07:09.087589 | orchestrator | 2025-10-09 11:07:09 - testbed-node-0 2025-10-09 11:07:09.172760 | orchestrator | 2025-10-09 11:07:09 - testbed-node-1 2025-10-09 11:07:09.253526 | orchestrator | 2025-10-09 11:07:09 - testbed-node-2 2025-10-09 11:07:09.344085 | orchestrator | 2025-10-09 11:07:09 - testbed-node-5 2025-10-09 11:07:09.437355 | orchestrator | 2025-10-09 11:07:09 - testbed-node-4 2025-10-09 11:07:09.525989 | orchestrator | 2025-10-09 11:07:09 - testbed-node-3 2025-10-09 11:07:09.616317 | orchestrator | 2025-10-09 11:07:09 - clean up keypairs 2025-10-09 11:07:09.636519 | orchestrator | 2025-10-09 11:07:09 - testbed 2025-10-09 11:07:09.664282 | orchestrator | 2025-10-09 11:07:09 - wait for servers to be gone 2025-10-09 11:07:22.600137 | orchestrator | 2025-10-09 11:07:22 - clean up ports 2025-10-09 11:07:22.777175 | orchestrator | 2025-10-09 11:07:22 - 0be86678-2175-4faf-8104-e65f5fc1d882 2025-10-09 11:07:23.019452 | orchestrator | 2025-10-09 11:07:23 - 25ffb944-b5f0-4a24-b7a1-f99d18579047 2025-10-09 11:07:23.971009 | orchestrator | 2025-10-09 11:07:23 - 5e6e6946-35ea-4f5a-b366-4a709d0656e9 2025-10-09 11:07:24.213949 | orchestrator | 2025-10-09 11:07:24 - 78ccd999-b9cf-477d-812c-3e991c33c34e 2025-10-09 11:07:24.424581 | orchestrator | 2025-10-09 11:07:24 - 8542b6e5-743d-45a0-a5a8-6df2e7ebd9aa 2025-10-09 11:07:24.630792 | orchestrator | 2025-10-09 11:07:24 - a6e66cb4-f78c-430a-8747-6288581695d6 2025-10-09 11:07:24.839732 | orchestrator | 2025-10-09 11:07:24 - f47b4427-b371-4f4b-98de-3a665dd89dd2 2025-10-09 11:07:25.053926 | orchestrator | 2025-10-09 11:07:25 - clean up volumes 2025-10-09 11:07:25.162260 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-0-node-base 2025-10-09 11:07:25.206816 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-1-node-base 2025-10-09 11:07:25.244926 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-4-node-base 2025-10-09 11:07:25.287244 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-5-node-base 2025-10-09 11:07:25.328982 | orchestrator | 2025-10-09 
11:07:25 - testbed-volume-2-node-base 2025-10-09 11:07:25.370849 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-3-node-base 2025-10-09 11:07:25.417666 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-manager-base 2025-10-09 11:07:25.456235 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-0-node-3 2025-10-09 11:07:25.498108 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-5-node-5 2025-10-09 11:07:25.537407 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-8-node-5 2025-10-09 11:07:25.576219 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-4-node-4 2025-10-09 11:07:25.620754 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-3-node-3 2025-10-09 11:07:25.661545 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-7-node-4 2025-10-09 11:07:25.701144 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-1-node-4 2025-10-09 11:07:25.740504 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-2-node-5 2025-10-09 11:07:25.782294 | orchestrator | 2025-10-09 11:07:25 - testbed-volume-6-node-3 2025-10-09 11:07:25.823375 | orchestrator | 2025-10-09 11:07:25 - disconnect routers 2025-10-09 11:07:25.925744 | orchestrator | 2025-10-09 11:07:25 - testbed 2025-10-09 11:07:26.987630 | orchestrator | 2025-10-09 11:07:26 - clean up subnets 2025-10-09 11:07:27.027360 | orchestrator | 2025-10-09 11:07:27 - subnet-testbed-management 2025-10-09 11:07:27.183951 | orchestrator | 2025-10-09 11:07:27 - clean up networks 2025-10-09 11:07:27.356750 | orchestrator | 2025-10-09 11:07:27 - net-testbed-management 2025-10-09 11:07:27.697557 | orchestrator | 2025-10-09 11:07:27 - clean up security groups 2025-10-09 11:07:27.736623 | orchestrator | 2025-10-09 11:07:27 - testbed-node 2025-10-09 11:07:27.843511 | orchestrator | 2025-10-09 11:07:27 - testbed-management 2025-10-09 11:07:27.957258 | orchestrator | 2025-10-09 11:07:27 - clean up floating ips 2025-10-09 11:07:27.993171 | orchestrator | 2025-10-09 11:07:27 - 81.163.192.55 2025-10-09 11:07:28.372704 | orchestrator | 2025-10-09 11:07:28 - clean up routers 2025-10-09 11:07:28.942174 | orchestrator | 2025-10-09 11:07:28 - testbed 2025-10-09 11:07:29.856075 | orchestrator | ok: Runtime: 0:00:22.169240 2025-10-09 11:07:29.859541 | 2025-10-09 11:07:29.859683 | PLAY RECAP 2025-10-09 11:07:29.859812 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-10-09 11:07:29.859866 | 2025-10-09 11:07:29.993979 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-10-09 11:07:29.996399 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-10-09 11:07:30.735890 | 2025-10-09 11:07:30.736039 | PLAY [Cleanup play] 2025-10-09 11:07:30.751656 | 2025-10-09 11:07:30.751794 | TASK [Set cloud fact (Zuul deployment)] 2025-10-09 11:07:30.808149 | orchestrator | ok 2025-10-09 11:07:30.817917 | 2025-10-09 11:07:30.818062 | TASK [Set cloud fact (local deployment)] 2025-10-09 11:07:30.852299 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:30.868809 | 2025-10-09 11:07:30.868952 | TASK [Clean the cloud environment] 2025-10-09 11:07:31.979543 | orchestrator | 2025-10-09 11:07:31 - clean up servers 2025-10-09 11:07:32.443949 | orchestrator | 2025-10-09 11:07:32 - clean up keypairs 2025-10-09 11:07:32.461308 | orchestrator | 2025-10-09 11:07:32 - wait for servers to be gone 2025-10-09 11:07:32.504078 | orchestrator | 2025-10-09 11:07:32 - clean up ports 2025-10-09 11:07:32.575668 | orchestrator | 2025-10-09 11:07:32 - clean 
up volumes 2025-10-09 11:07:32.632774 | orchestrator | 2025-10-09 11:07:32 - disconnect routers 2025-10-09 11:07:32.665118 | orchestrator | 2025-10-09 11:07:32 - clean up subnets 2025-10-09 11:07:32.682998 | orchestrator | 2025-10-09 11:07:32 - clean up networks 2025-10-09 11:07:32.836780 | orchestrator | 2025-10-09 11:07:32 - clean up security groups 2025-10-09 11:07:32.870960 | orchestrator | 2025-10-09 11:07:32 - clean up floating ips 2025-10-09 11:07:32.892677 | orchestrator | 2025-10-09 11:07:32 - clean up routers 2025-10-09 11:07:33.410466 | orchestrator | ok: Runtime: 0:00:01.289529 2025-10-09 11:07:33.415790 | 2025-10-09 11:07:33.415954 | PLAY RECAP 2025-10-09 11:07:33.416074 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-10-09 11:07:33.416137 | 2025-10-09 11:07:33.530949 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-10-09 11:07:33.533273 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-10-09 11:07:34.257704 | 2025-10-09 11:07:34.257895 | PLAY [Base post-fetch] 2025-10-09 11:07:34.272916 | 2025-10-09 11:07:34.273046 | TASK [fetch-output : Set log path for multiple nodes] 2025-10-09 11:07:34.338794 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:34.354417 | 2025-10-09 11:07:34.354631 | TASK [fetch-output : Set log path for single node] 2025-10-09 11:07:34.404322 | orchestrator | ok 2025-10-09 11:07:34.412981 | 2025-10-09 11:07:34.413106 | LOOP [fetch-output : Ensure local output dirs] 2025-10-09 11:07:34.874613 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/17032c6985c94374a7f93c30fddd8d2c/work/logs" 2025-10-09 11:07:35.156443 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/17032c6985c94374a7f93c30fddd8d2c/work/artifacts" 2025-10-09 11:07:35.417130 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/17032c6985c94374a7f93c30fddd8d2c/work/docs" 2025-10-09 11:07:35.440236 | 2025-10-09 11:07:35.440397 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-10-09 11:07:36.351107 | orchestrator | changed: .d..t...... ./ 2025-10-09 11:07:36.351353 | orchestrator | changed: All items complete 2025-10-09 11:07:36.351391 | 2025-10-09 11:07:37.075512 | orchestrator | changed: .d..t...... ./ 2025-10-09 11:07:37.808138 | orchestrator | changed: .d..t...... 
./ 2025-10-09 11:07:37.827156 | 2025-10-09 11:07:37.827284 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-10-09 11:07:37.859672 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:37.862673 | orchestrator | skipping: Conditional result was False 2025-10-09 11:07:37.888614 | 2025-10-09 11:07:37.888722 | PLAY RECAP 2025-10-09 11:07:37.888814 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-10-09 11:07:37.888854 | 2025-10-09 11:07:38.010345 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-10-09 11:07:38.013954 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-10-09 11:07:38.746619 | 2025-10-09 11:07:38.746810 | PLAY [Base post] 2025-10-09 11:07:38.761017 | 2025-10-09 11:07:38.761177 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-10-09 11:07:40.048704 | orchestrator | changed 2025-10-09 11:07:40.058629 | 2025-10-09 11:07:40.058748 | PLAY RECAP 2025-10-09 11:07:40.058935 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-10-09 11:07:40.059017 | 2025-10-09 11:07:40.174196 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-10-09 11:07:40.175418 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-10-09 11:07:40.965510 | 2025-10-09 11:07:40.965677 | PLAY [Base post-logs] 2025-10-09 11:07:40.975992 | 2025-10-09 11:07:40.976121 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-10-09 11:07:41.427993 | localhost | changed 2025-10-09 11:07:41.438112 | 2025-10-09 11:07:41.438257 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-10-09 11:07:41.463576 | localhost | ok 2025-10-09 11:07:41.466588 | 2025-10-09 11:07:41.466686 | TASK [Set zuul-log-path fact] 2025-10-09 11:07:41.481715 | localhost | ok 2025-10-09 11:07:41.490277 | 2025-10-09 11:07:41.490497 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-10-09 11:07:41.515148 | localhost | ok 2025-10-09 11:07:41.518255 | 2025-10-09 11:07:41.518355 | TASK [upload-logs : Create log directories] 2025-10-09 11:07:41.999007 | localhost | changed 2025-10-09 11:07:42.004955 | 2025-10-09 11:07:42.005137 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-10-09 11:07:42.466156 | localhost -> localhost | ok: Runtime: 0:00:00.006754 2025-10-09 11:07:42.474609 | 2025-10-09 11:07:42.474810 | TASK [upload-logs : Upload logs to log server] 2025-10-09 11:07:43.020718 | localhost | Output suppressed because no_log was given 2025-10-09 11:07:43.023590 | 2025-10-09 11:07:43.023813 | LOOP [upload-logs : Compress console log and json output] 2025-10-09 11:07:43.081019 | localhost | skipping: Conditional result was False 2025-10-09 11:07:43.084434 | localhost | skipping: Conditional result was False 2025-10-09 11:07:43.096229 | 2025-10-09 11:07:43.096411 | LOOP [upload-logs : Upload compressed console log and json output] 2025-10-09 11:07:43.150700 | localhost | skipping: Conditional result was False 2025-10-09 11:07:43.151070 | 2025-10-09 11:07:43.155668 | localhost | skipping: Conditional result was False 2025-10-09 11:07:43.167633 | 2025-10-09 11:07:43.167882 | LOOP [upload-logs : Upload console log and json output]
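
The two "Clean the cloud environment" passes above tear resources down in dependency order: servers and keypairs, then ports and volumes, then the router interface, subnets, networks, security groups, floating IPs, and finally the router itself; the second pass finds nothing left, which confirms the teardown is idempotent. The job drives this through its own cleanup tooling; the following is only an illustrative sketch of the same order, reusing the `test` cloud entry and the testbed-* names visible in the log:

    #!/usr/bin/env bash
    # Dependency-ordered teardown mirroring the "clean up ..." messages above.
    # Sketch only: the job uses its own cleanup script; the names and cloud entry
    # ("test", testbed-*, net-testbed-management, ...) are taken from the log.
    set -u
    os() { openstack --os-cloud test "$@"; }

    for s in $(os server list -f value -c Name | grep '^testbed'); do
        os server delete --wait "$s"                       # servers first
    done
    os keypair delete testbed || true                      # then the keypair

    for p in $(os port list --network net-testbed-management -f value -c ID); do
        os port delete "$p"                                # leftover ports
    done
    for v in $(os volume list -f value -c Name | grep '^testbed-volume'); do
        os volume delete "$v"                              # detached volumes
    done

    os router remove subnet testbed subnet-testbed-management   # disconnect router
    os subnet delete subnet-testbed-management
    os network delete net-testbed-management
    os security group delete testbed-node testbed-management
    for ip in $(os floating ip list -f value -c 'Floating IP Address'); do
        os floating ip delete "$ip"                        # release floating IPs
    done
    os router delete testbed                               # router goes last

Running the sketch a second time is expected to find nothing to remove, which is exactly what the second cleanup pass above shows.
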